00:00:00.000 Started by upstream project "autotest-per-patch" build number 132819
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.108 The recommended git tool is: git
00:00:00.108 using credential 00000000-0000-0000-0000-000000000002
00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.160 Fetching changes from the remote Git repository
00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.202 Using shallow fetch with depth 1
00:00:00.202 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.202 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.476 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.491 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.503 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.503 > git config core.sparsecheckout # timeout=10
00:00:07.515 > git read-tree -mu HEAD # timeout=10
00:00:07.533 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.554 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.554 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.644 [Pipeline] Start of Pipeline
00:00:07.658 [Pipeline] library
00:00:07.659 Loading library shm_lib@master
00:00:07.659 Library shm_lib@master is cached. Copying from home.
00:00:07.676 [Pipeline] node
00:00:07.687 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.689 [Pipeline] {
00:00:07.700 [Pipeline] catchError
00:00:07.701 [Pipeline] {
00:00:07.711 [Pipeline] wrap
00:00:07.717 [Pipeline] {
00:00:07.722 [Pipeline] stage
00:00:07.723 [Pipeline] { (Prologue)
00:00:07.927 [Pipeline] sh
00:00:08.213 + logger -p user.info -t JENKINS-CI
00:00:08.228 [Pipeline] echo
00:00:08.229 Node: WFP4
00:00:08.234 [Pipeline] sh
00:00:08.533 [Pipeline] setCustomBuildProperty
00:00:08.543 [Pipeline] echo
00:00:08.544 Cleanup processes
00:00:08.547 [Pipeline] sh
00:00:08.827 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.827 3978240 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.840 [Pipeline] sh
00:00:09.126 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.126 ++ grep -v 'sudo pgrep'
00:00:09.127 ++ awk '{print $1}'
00:00:09.127 + sudo kill -9
00:00:09.127 + true
00:00:09.142 [Pipeline] cleanWs
00:00:09.152 [WS-CLEANUP] Deleting project workspace...
00:00:09.152 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.159 [WS-CLEANUP] done
00:00:09.164 [Pipeline] setCustomBuildProperty
00:00:09.180 [Pipeline] sh
00:00:09.464 + sudo git config --global --replace-all safe.directory '*'
00:00:09.567 [Pipeline] httpRequest
00:00:09.963 [Pipeline] echo
00:00:09.965 Sorcerer 10.211.164.112 is alive
00:00:09.974 [Pipeline] retry
00:00:09.976 [Pipeline] {
00:00:09.992 [Pipeline] httpRequest
00:00:09.997 HttpMethod: GET
00:00:09.997 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.998 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.009 Response Code: HTTP/1.1 200 OK
00:00:10.009 Success: Status code 200 is in the accepted range: 200,404
00:00:10.010 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.876 [Pipeline] }
00:00:13.893 [Pipeline] // retry
00:00:13.901 [Pipeline] sh
00:00:14.186 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.202 [Pipeline] httpRequest
00:00:14.600 [Pipeline] echo
00:00:14.601 Sorcerer 10.211.164.112 is alive
00:00:14.610 [Pipeline] retry
00:00:14.612 [Pipeline] {
00:00:14.626 [Pipeline] httpRequest
00:00:14.630 HttpMethod: GET
00:00:14.631 URL: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:00:14.631 Sending request to url: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:00:14.650 Response Code: HTTP/1.1 200 OK
00:00:14.650 Success: Status code 200 is in the accepted range: 200,404
00:00:14.651 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:01:49.823 [Pipeline] }
00:01:49.840 [Pipeline] // retry
00:01:49.848 [Pipeline] sh
00:01:50.133 + tar --no-same-owner -xf spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz
00:01:52.683 [Pipeline] sh
00:01:52.970 + git -C spdk log --oneline -n5
00:01:52.970 1ae735a5d nvme: add poll_group interrupt callback
00:01:52.970 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:01:52.970 969b360d9 thread: fd_group-based interrupts
00:01:52.970 851f166ec thread: move interrupt allocation to a function
00:01:52.970 c12cb8fe3 util: add method for setting fd_group's wrapper
00:01:52.980 [Pipeline] }
00:01:52.994 [Pipeline] // stage
00:01:53.003 [Pipeline] stage
00:01:53.004 [Pipeline] { (Prepare)
00:01:53.018 [Pipeline] writeFile
00:01:53.033 [Pipeline] sh
00:01:53.317 + logger -p user.info -t JENKINS-CI
00:01:53.330 [Pipeline] sh
00:01:53.614 + logger -p user.info -t JENKINS-CI
00:01:53.626 [Pipeline] sh
00:01:53.909 + cat autorun-spdk.conf
00:01:53.909 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.909 SPDK_TEST_NVMF=1
00:01:53.909 SPDK_TEST_NVME_CLI=1
00:01:53.909 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:53.909 SPDK_TEST_NVMF_NICS=e810
00:01:53.909 SPDK_TEST_VFIOUSER=1
00:01:53.909 SPDK_RUN_UBSAN=1
00:01:53.909 NET_TYPE=phy
00:01:53.917 RUN_NIGHTLY=0
00:01:53.921 [Pipeline] readFile
00:01:53.945 [Pipeline] withEnv
00:01:53.947 [Pipeline] {
00:01:53.959 [Pipeline] sh
00:01:54.243 + set -ex
00:01:54.243 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:54.243 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:54.243 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:54.243 ++ SPDK_TEST_NVMF=1
00:01:54.243 ++ SPDK_TEST_NVME_CLI=1
00:01:54.243 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:54.243 ++ SPDK_TEST_NVMF_NICS=e810
00:01:54.243 ++ SPDK_TEST_VFIOUSER=1
00:01:54.243 ++ SPDK_RUN_UBSAN=1
00:01:54.243 ++ NET_TYPE=phy
00:01:54.243 ++ RUN_NIGHTLY=0
00:01:54.243 + case $SPDK_TEST_NVMF_NICS in
00:01:54.243 + DRIVERS=ice
00:01:54.243 + [[ tcp == \r\d\m\a ]]
00:01:54.243 + [[ -n ice ]]
00:01:54.243 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:54.243 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:54.243 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:54.243 rmmod: ERROR: Module i40iw is not currently loaded
00:01:54.243 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:54.243 + true
00:01:54.243 + for D in $DRIVERS
00:01:54.243 + sudo modprobe ice
00:01:54.243 + exit 0
00:01:54.252 [Pipeline] }
00:01:54.267 [Pipeline] // withEnv
00:01:54.271 [Pipeline] }
00:01:54.285 [Pipeline] // stage
00:01:54.294 [Pipeline] catchError
00:01:54.296 [Pipeline] {
00:01:54.309 [Pipeline] timeout
00:01:54.309 Timeout set to expire in 1 hr 0 min
00:01:54.311 [Pipeline] {
00:01:54.325 [Pipeline] stage
00:01:54.326 [Pipeline] { (Tests)
00:01:54.340 [Pipeline] sh
00:01:54.692 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.692 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.692 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.692 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:54.692 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:54.692 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:54.692 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:54.692 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:54.692 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:54.692 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:54.692 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:54.692 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:54.692 + source /etc/os-release
00:01:54.692 ++ NAME='Fedora Linux'
00:01:54.692 ++ VERSION='39 (Cloud Edition)'
00:01:54.692 ++ ID=fedora
00:01:54.692 ++ VERSION_ID=39
00:01:54.692 ++ VERSION_CODENAME=
00:01:54.692 ++ PLATFORM_ID=platform:f39
00:01:54.692 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.692 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.692 ++ LOGO=fedora-logo-icon
00:01:54.692 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.692 ++ HOME_URL=https://fedoraproject.org/
00:01:54.692 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.692 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.692 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.692 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.692 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.692 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.692 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.692 ++ SUPPORT_END=2024-11-12
00:01:54.692 ++ VARIANT='Cloud Edition'
00:01:54.692 ++ VARIANT_ID=cloud
00:01:54.692 + uname -a
00:01:54.692 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:54.692 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:57.231 Hugepages
00:01:57.231 node hugesize free / total
00:01:57.231 node0 1048576kB 0 / 0
00:01:57.231 node0 2048kB 0 / 0
00:01:57.231 node1 1048576kB 0 / 0
00:01:57.231 node1 2048kB 0 / 0
00:01:57.231
00:01:57.231 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:57.231 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:57.231 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:57.231 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:57.231 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:57.231 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:57.231 + rm -f /tmp/spdk-ld-path
00:01:57.231 + source autorun-spdk.conf
00:01:57.231 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.231 ++ SPDK_TEST_NVMF=1
00:01:57.231 ++ SPDK_TEST_NVME_CLI=1
00:01:57.231 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.231 ++ SPDK_TEST_NVMF_NICS=e810
00:01:57.231 ++ SPDK_TEST_VFIOUSER=1
00:01:57.231 ++ SPDK_RUN_UBSAN=1
00:01:57.231 ++ NET_TYPE=phy
00:01:57.231 ++ RUN_NIGHTLY=0
00:01:57.231 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:57.231 + [[ -n '' ]]
00:01:57.231 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:57.231 + for M in /var/spdk/build-*-manifest.txt
00:01:57.231 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:57.231 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.231 + for M in /var/spdk/build-*-manifest.txt
00:01:57.231 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:57.231 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.231 + for M in /var/spdk/build-*-manifest.txt
00:01:57.231 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:57.231 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:57.231 ++ uname
00:01:57.231 + [[ Linux == \L\i\n\u\x ]]
00:01:57.231 + sudo dmesg -T
00:01:57.231 + sudo dmesg --clear
00:01:57.231 + dmesg_pid=3979702
00:01:57.231 + [[ Fedora Linux == FreeBSD ]]
00:01:57.231 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.231 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.231 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:57.231 + sudo dmesg -Tw
00:01:57.231 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:57.231 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:57.231 + [[ -x /usr/src/fio-static/fio ]]
00:01:57.231 + export FIO_BIN=/usr/src/fio-static/fio
00:01:57.231 + FIO_BIN=/usr/src/fio-static/fio
00:01:57.231 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:57.231 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:57.231 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:57.231 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:57.231 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:57.231 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:57.231 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:57.231 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:57.231 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.495 03:48:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:57.495 03:48:56 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:57.495 03:48:56 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:57.495 03:48:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:57.495 03:48:56 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.495 03:48:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:57.495 03:48:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:57.495 03:48:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:57.495 03:48:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:57.495 03:48:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:57.495 03:48:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:57.496 03:48:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.496 03:48:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.496 03:48:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.496 03:48:56 -- paths/export.sh@5 -- $ export PATH
00:01:57.496 03:48:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:57.496 03:48:56 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:57.496 03:48:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:57.496 03:48:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733798936.XXXXXX
00:01:57.496 03:48:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733798936.05nSOr
00:01:57.496 03:48:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:57.496 03:48:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:57.496 03:48:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:57.496 03:48:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:57.496 03:48:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:57.496 03:48:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:57.496 03:48:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:57.496 03:48:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.496 03:48:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:57.496 03:48:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:57.496 03:48:56 -- pm/common@17 -- $ local monitor
00:01:57.496 03:48:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.496 03:48:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.496 03:48:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.496 03:48:56 -- pm/common@21 -- $ date +%s
00:01:57.496 03:48:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:57.496 03:48:56 -- pm/common@21 -- $ date +%s
00:01:57.496 03:48:56 -- pm/common@25 -- $ sleep 1
00:01:57.496 03:48:56 -- pm/common@21 -- $ date +%s
00:01:57.496 03:48:56 -- pm/common@21 -- $ date +%s
00:01:57.496 03:48:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798936
00:01:57.496 03:48:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798936
00:01:57.496 03:48:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798936
00:01:57.496 03:48:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733798936
00:01:57.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798936_collect-vmstat.pm.log
00:01:57.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798936_collect-cpu-load.pm.log
00:01:57.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798936_collect-cpu-temp.pm.log
00:01:57.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733798936_collect-bmc-pm.bmc.pm.log
00:01:58.436 03:48:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:58.436 03:48:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:58.436 03:48:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:58.436 03:48:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:58.436 03:48:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:58.436 Tue Dec 10 02:48:57 AM UTC 2024
00:01:58.436 03:48:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:58.436 v25.01-pre-320-g1ae735a5d
00:01:58.436 03:48:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:58.436 03:48:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:58.436 03:48:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:58.436 03:48:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:58.436 03:48:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:58.436 03:48:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.436 ************************************
00:01:58.436 START TEST ubsan
00:01:58.436 ************************************
00:01:58.695 03:48:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:58.695 using ubsan
00:01:58.695
00:01:58.695 real 0m0.000s
00:01:58.695 user 0m0.000s
00:01:58.695 sys 0m0.000s
00:01:58.695 03:48:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:58.695 03:48:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:58.695 ************************************
00:01:58.695 END TEST ubsan
00:01:58.695 ************************************
00:01:58.695 03:48:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:58.695 03:48:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:58.695 03:48:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:58.695 03:48:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:58.695 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:58.695 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:58.955 Using 'verbs' RDMA provider
00:02:12.112 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:24.328 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:24.328 Creating mk/config.mk...done.
00:02:24.328 Creating mk/cc.flags.mk...done.
00:02:24.328 Type 'make' to build.
00:02:24.328 03:49:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:24.328 03:49:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:24.328 03:49:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:24.328 03:49:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:24.328 ************************************
00:02:24.328 START TEST make
00:02:24.328 ************************************
00:02:24.328 03:49:23 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:24.328 make[1]: Nothing to be done for 'all'.
00:02:25.715 The Meson build system
00:02:25.715 Version: 1.5.0
00:02:25.715 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:25.715 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:25.715 Build type: native build
00:02:25.715 Project name: libvfio-user
00:02:25.715 Project version: 0.0.1
00:02:25.715 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:25.715 C linker for the host machine: cc ld.bfd 2.40-14
00:02:25.715 Host machine cpu family: x86_64
00:02:25.715 Host machine cpu: x86_64
00:02:25.715 Run-time dependency threads found: YES
00:02:25.715 Library dl found: YES
00:02:25.715 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:25.715 Run-time dependency json-c found: YES 0.17
00:02:25.715 Run-time dependency cmocka found: YES 1.1.7
00:02:25.715 Program pytest-3 found: NO
00:02:25.715 Program flake8 found: NO
00:02:25.715 Program misspell-fixer found: NO
00:02:25.715 Program restructuredtext-lint found: NO
00:02:25.715 Program valgrind found: YES (/usr/bin/valgrind)
00:02:25.715 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:25.715 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:25.715 Compiler for C supports arguments -Wwrite-strings: YES
00:02:25.715 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:25.715 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:25.715 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:25.715 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:25.715 Build targets in project: 8
00:02:25.715 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:25.715 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:25.715
00:02:25.715 libvfio-user 0.0.1
00:02:25.715
00:02:25.715 User defined options
00:02:25.715 buildtype : debug
00:02:25.715 default_library: shared
00:02:25.715 libdir : /usr/local/lib
00:02:25.715
00:02:25.715 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:26.651 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:26.652 [1/37] Compiling C object samples/null.p/null.c.o
00:02:26.652 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:26.652 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:26.652 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:26.652 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:26.652 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:26.652 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:26.652 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:26.652 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:26.652 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:26.652 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:26.652 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:26.652 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:26.652 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:26.652 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:26.652 [16/37] Compiling C object samples/server.p/server.c.o
00:02:26.652 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:26.652 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:26.652 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:26.652 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:26.652 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:26.652 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:26.652 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:26.652 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:26.652 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:26.652 [26/37] Compiling C object samples/client.p/client.c.o
00:02:26.652 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:26.652 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:26.652 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:26.652 [30/37] Linking target samples/client
00:02:26.652 [31/37] Linking target test/unit_tests
00:02:26.910 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:26.910 [33/37] Linking target samples/null
00:02:26.910 [34/37] Linking target samples/lspci
00:02:26.910 [35/37] Linking target samples/shadow_ioeventfd_server
00:02:26.910 [36/37] Linking target samples/server
00:02:26.910 [37/37] Linking target samples/gpio-pci-idio-16
00:02:26.910 INFO: autodetecting backend as ninja
00:02:26.910 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:26.910 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:27.477 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:27.477 ninja: no work to do.
00:02:32.747 The Meson build system
00:02:32.747 Version: 1.5.0
00:02:32.747 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:32.747 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:32.747 Build type: native build
00:02:32.747 Program cat found: YES (/usr/bin/cat)
00:02:32.747 Project name: DPDK
00:02:32.747 Project version: 24.03.0
00:02:32.747 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:32.747 C linker for the host machine: cc ld.bfd 2.40-14
00:02:32.747 Host machine cpu family: x86_64
00:02:32.747 Host machine cpu: x86_64
00:02:32.747 Message: ## Building in Developer Mode ##
00:02:32.747 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:32.747 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:32.747 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:32.747 Program python3 found: YES (/usr/bin/python3)
00:02:32.747 Program cat found: YES (/usr/bin/cat)
00:02:32.747 Compiler for C supports arguments -march=native: YES
00:02:32.747 Checking for size of "void *" : 8
00:02:32.747 Checking for size of "void *" : 8 (cached)
00:02:32.747 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:32.747 Library m found: YES
00:02:32.747 Library numa found: YES
00:02:32.747 Has header "numaif.h" : YES
00:02:32.747 Library fdt found: NO
00:02:32.747 Library execinfo found: NO
00:02:32.747 Has header "execinfo.h" : YES
00:02:32.747 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:32.747 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:32.747 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:32.747 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:32.747 Run-time dependency openssl found: YES 3.1.1
00:02:32.747 Run-time dependency libpcap found: YES 1.10.4
00:02:32.747 Has header "pcap.h" with dependency libpcap: YES
00:02:32.747 Compiler for C supports arguments -Wcast-qual: YES
00:02:32.747 Compiler for C supports arguments -Wdeprecated: YES
00:02:32.747 Compiler for C supports arguments -Wformat: YES
00:02:32.748 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:32.748 Compiler for C supports arguments -Wformat-security: NO
00:02:32.748 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:32.748 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:32.748 Compiler for C supports arguments -Wnested-externs: YES
00:02:32.748 Compiler for C supports arguments -Wold-style-definition: YES
00:02:32.748 Compiler for C supports arguments -Wpointer-arith: YES
00:02:32.748 Compiler for C supports arguments -Wsign-compare: YES
00:02:32.748 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:32.748 Compiler for C supports arguments -Wundef: YES
00:02:32.748 Compiler for C supports arguments -Wwrite-strings: YES
00:02:32.748 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:32.748 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:32.748 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:32.748 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:32.748 Program objdump found: YES (/usr/bin/objdump)
00:02:32.748 Compiler for C supports arguments -mavx512f: YES
00:02:32.748 Checking if "AVX512 checking" compiles: YES
00:02:32.748 Fetching value of define "__SSE4_2__" : 1
00:02:32.748 Fetching value of define "__AES__" : 1
00:02:32.748 Fetching value of define "__AVX__" : 1
00:02:32.748 Fetching value of define "__AVX2__" : 1
00:02:32.748 Fetching value of define "__AVX512BW__" : 1
00:02:32.748 Fetching value of define "__AVX512CD__" : 1
00:02:32.748 Fetching value of define "__AVX512DQ__" : 1
00:02:32.748 Fetching value of define "__AVX512F__" : 1
00:02:32.748 Fetching value of define "__AVX512VL__" : 1
00:02:32.748 Fetching value of define "__PCLMUL__" : 1
00:02:32.748 Fetching value of define "__RDRND__" : 1
00:02:32.748 Fetching value of define "__RDSEED__" : 1
00:02:32.748 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:32.748 Fetching value of define "__znver1__" : (undefined)
00:02:32.748 Fetching value of define "__znver2__" : (undefined)
00:02:32.748 Fetching value of define "__znver3__" : (undefined)
00:02:32.748 Fetching value of define "__znver4__" : (undefined)
00:02:32.748 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:32.748 Message: lib/log: Defining dependency "log"
00:02:32.748 Message: lib/kvargs: Defining dependency "kvargs"
00:02:32.748 Message: lib/telemetry: Defining dependency "telemetry"
00:02:32.748 Checking for function "getentropy" : NO
00:02:32.748 Message: lib/eal: Defining dependency "eal"
00:02:32.748 Message: lib/ring: Defining dependency "ring"
00:02:32.748 Message: lib/rcu: Defining dependency "rcu"
00:02:32.748 Message: lib/mempool: Defining dependency "mempool"
00:02:32.748 Message: lib/mbuf: Defining dependency "mbuf"
00:02:32.748 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:32.748 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:32.748 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:32.748 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:32.748 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:32.748 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:32.748 Compiler for C supports arguments -mpclmul: YES
00:02:32.748 Compiler for C supports arguments -maes: YES
00:02:32.748 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:32.748 Compiler for C supports arguments -mavx512bw: YES
00:02:32.748 Compiler for C supports arguments -mavx512dq: YES
00:02:32.748 Compiler for C supports arguments -mavx512vl: YES
00:02:32.748 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:32.748 Compiler for C supports arguments -mavx2: YES
00:02:32.748 Compiler for C supports arguments -mavx: YES
00:02:32.748 Message: lib/net: Defining dependency "net"
00:02:32.748 Message: lib/meter: Defining dependency "meter"
00:02:32.748 Message: lib/ethdev: Defining dependency "ethdev"
00:02:32.748 Message: lib/pci: Defining dependency "pci"
00:02:32.748 Message: lib/cmdline: Defining dependency "cmdline"
00:02:32.748 Message: lib/hash: Defining dependency "hash"
00:02:32.748 Message: lib/timer: Defining dependency "timer"
00:02:32.748 Message: lib/compressdev: Defining dependency "compressdev"
00:02:32.748 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:32.748 Message: lib/dmadev: Defining dependency "dmadev"
00:02:32.748 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:32.748 Message: lib/power: Defining dependency "power"
00:02:32.748 Message: lib/reorder: Defining dependency "reorder"
00:02:32.748 Message: lib/security: Defining dependency "security"
00:02:32.748 Has header "linux/userfaultfd.h" : YES
00:02:32.748 Has header "linux/vduse.h" : YES
00:02:32.748 Message: lib/vhost: Defining dependency "vhost"
00:02:32.748 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:32.748 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:32.748 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:32.748 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:32.748 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:32.748 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:32.748 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:32.748 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:32.748 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:32.748 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:32.748 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:32.748 Configuring doxy-api-html.conf using configuration
00:02:32.748 Configuring doxy-api-man.conf using configuration
00:02:32.748 Program mandb found: YES (/usr/bin/mandb)
00:02:32.748 Program sphinx-build found: NO
00:02:32.748 Configuring rte_build_config.h using configuration
00:02:32.748 Message:
00:02:32.748 =================
00:02:32.748 Applications Enabled
00:02:32.748 =================
00:02:32.748
00:02:32.748 apps:
00:02:32.748
00:02:32.748
00:02:32.748 Message:
00:02:32.748 =================
00:02:32.748 Libraries Enabled
00:02:32.748 =================
00:02:32.748
00:02:32.748 libs:
00:02:32.748 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:32.748 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:32.748 cryptodev, dmadev, power, reorder, security, vhost,
00:02:32.748
00:02:32.748 Message:
00:02:32.748 ===============
00:02:32.748 Drivers Enabled
00:02:32.748 ===============
00:02:32.748
00:02:32.748 common:
00:02:32.748
00:02:32.748 bus:
00:02:32.748 pci, vdev,
00:02:32.748 mempool:
00:02:32.748 ring,
00:02:32.748 dma:
00:02:32.748
00:02:32.748 net:
00:02:32.748
00:02:32.748 crypto:
00:02:32.748
00:02:32.748 compress:
00:02:32.748
00:02:32.748 vdpa:
00:02:32.748
00:02:32.748
00:02:32.748 Message:
00:02:32.748 =================
00:02:32.748 Content Skipped
00:02:32.748 =================
00:02:32.748
00:02:32.748 apps:
00:02:32.748 dumpcap: explicitly disabled via build config
00:02:32.748 graph: explicitly disabled via build config
00:02:32.748 pdump: explicitly disabled via build config
00:02:32.748 proc-info: explicitly disabled via build config
00:02:32.748 test-acl: explicitly disabled via build config
00:02:32.748 test-bbdev: explicitly disabled via build config
00:02:32.748 test-cmdline: explicitly disabled via build config
00:02:32.748 test-compress-perf: explicitly disabled via build config
00:02:32.748 test-crypto-perf: explicitly disabled via build config
00:02:32.748 test-dma-perf: explicitly disabled via build config
00:02:32.748 test-eventdev: explicitly disabled via build config
00:02:32.748 test-fib: explicitly disabled via build config
00:02:32.748 test-flow-perf: explicitly disabled via build config
00:02:32.748 test-gpudev: explicitly disabled via build config
00:02:32.748 test-mldev: explicitly disabled via build config
00:02:32.748 test-pipeline: explicitly disabled via build config
00:02:32.748 test-pmd: explicitly disabled via build config
00:02:32.748 test-regex: explicitly disabled via build config
00:02:32.748 test-sad: explicitly disabled via build config
00:02:32.748 test-security-perf: explicitly disabled via build config
00:02:32.748
00:02:32.748 libs:
00:02:32.748 argparse: explicitly disabled via build config
00:02:32.748 metrics: explicitly disabled via build config
00:02:32.748 acl: explicitly disabled via build config
00:02:32.748 bbdev: explicitly disabled via build config
00:02:32.748 bitratestats: explicitly disabled via build config
00:02:32.748 bpf: explicitly disabled via build config
00:02:32.748 cfgfile: explicitly disabled via build config
00:02:32.748 distributor: explicitly disabled via build config
00:02:32.748 efd: explicitly disabled via build config
00:02:32.748 eventdev: explicitly disabled via build config
00:02:32.748 dispatcher: explicitly disabled via build config
00:02:32.748 gpudev: explicitly disabled via build config
00:02:32.748 gro: explicitly disabled via build config
00:02:32.748 gso: explicitly disabled via build config
00:02:32.748 ip_frag: explicitly disabled via build config
00:02:32.748 jobstats: explicitly disabled via build config
00:02:32.748 latencystats: explicitly disabled via build config
00:02:32.748 lpm: explicitly disabled via build config
00:02:32.748 member: explicitly disabled via build config
00:02:32.748 pcapng: explicitly disabled via build config
00:02:32.748 rawdev: explicitly disabled via build config
00:02:32.748 regexdev: explicitly disabled via build config
00:02:32.748 mldev: explicitly disabled via build config
00:02:32.748 rib: explicitly disabled via build config
00:02:32.748 sched: explicitly disabled via build config
00:02:32.748 stack: explicitly disabled via build config
00:02:32.748 ipsec: explicitly disabled via build config
00:02:32.748 pdcp: explicitly disabled via build config
00:02:32.748 fib: explicitly disabled via build config
00:02:32.748 port: explicitly disabled via build config
00:02:32.748 pdump: explicitly disabled via build config
00:02:32.748 table: explicitly disabled via build config
00:02:32.748 pipeline: explicitly disabled via build config
00:02:32.748 graph: explicitly disabled via build config
00:02:32.748 node: explicitly disabled via build config
00:02:32.748
00:02:32.749 drivers:
00:02:32.749 common/cpt: not in enabled drivers build config
00:02:32.749 common/dpaax: not in enabled drivers build config
00:02:32.749 common/iavf: not in enabled drivers build config
00:02:32.749 common/idpf: not in enabled drivers build config
00:02:32.749 common/ionic: not in enabled drivers build config
00:02:32.749 common/mvep: not in enabled drivers build config
00:02:32.749 common/octeontx: not in enabled drivers build config
00:02:32.749 bus/auxiliary: not in enabled drivers build config
00:02:32.749 bus/cdx: not in enabled drivers build config
00:02:32.749 bus/dpaa: not in enabled drivers build config
00:02:32.749 bus/fslmc: not in enabled drivers build config
00:02:32.749 bus/ifpga: not in enabled drivers build config
00:02:32.749 bus/platform: not in enabled drivers build config
00:02:32.749 bus/uacce: not in enabled drivers build config
00:02:32.749 bus/vmbus: not in enabled drivers build config
00:02:32.749 common/cnxk: not in enabled drivers build config
00:02:32.749 common/mlx5: not in enabled drivers build config
00:02:32.749 common/nfp: not in enabled drivers build config
00:02:32.749 common/nitrox: not in enabled drivers build config
00:02:32.749 common/qat: not in enabled drivers build config
00:02:32.749 common/sfc_efx: not in enabled drivers build config
00:02:32.749 mempool/bucket: not in enabled drivers build config
00:02:32.749 mempool/cnxk: not in enabled drivers build config
00:02:32.749 mempool/dpaa: not in enabled drivers build config
00:02:32.749 mempool/dpaa2: not in enabled drivers build config
00:02:32.749 mempool/octeontx: not in enabled drivers build config
00:02:32.749 mempool/stack: not in enabled drivers build config
00:02:32.749 dma/cnxk: not in enabled drivers build config
00:02:32.749 dma/dpaa: not in enabled drivers build config
00:02:32.749 dma/dpaa2: not in enabled drivers build config
00:02:32.749 dma/hisilicon: not in enabled drivers build config
00:02:32.749 dma/idxd: not in enabled drivers build config
00:02:32.749 dma/ioat: not in enabled drivers build config
00:02:32.749 dma/skeleton: not in enabled drivers build config
00:02:32.749 net/af_packet: not in enabled drivers build config
00:02:32.749 net/af_xdp: not in enabled drivers build config
00:02:32.749 net/ark: not in enabled drivers build config
00:02:32.749 net/atlantic: not in enabled drivers build config
00:02:32.749 net/avp: not in enabled drivers build config
00:02:32.749 net/axgbe: not in enabled drivers build config
00:02:32.749 net/bnx2x: not in enabled drivers build config
00:02:32.749 net/bnxt: not in enabled drivers build config
00:02:32.749 net/bonding: not in enabled drivers build config
00:02:32.749 net/cnxk: not in enabled drivers build config
00:02:32.749 net/cpfl: not in enabled drivers build config
00:02:32.749 net/cxgbe: not in enabled drivers build config
00:02:32.749 net/dpaa: not in enabled drivers build config
00:02:32.749 net/dpaa2: not in enabled drivers build config
00:02:32.749 net/e1000: not in enabled drivers build config
00:02:32.749 net/ena: not in enabled drivers build config
00:02:32.749 net/enetc: not in enabled drivers build config
00:02:32.749 net/enetfec: not in enabled drivers build config
00:02:32.749 net/enic: not in enabled drivers build config
00:02:32.749 net/failsafe: not in enabled drivers build config
00:02:32.749 net/fm10k: not in enabled drivers build config
00:02:32.749 net/gve: not in enabled drivers build config
00:02:32.749 net/hinic: not in enabled drivers build config
00:02:32.749 net/hns3: not in enabled drivers build config
00:02:32.749 net/i40e: not in enabled drivers build config
00:02:32.749 net/iavf: not in enabled drivers build config
00:02:32.749 net/ice: not in enabled drivers build config
00:02:32.749 net/idpf: not in enabled drivers build config
00:02:32.749 net/igc: not in enabled drivers build config
00:02:32.749 net/ionic: not in enabled drivers build config
00:02:32.749 net/ipn3ke: not in enabled drivers build config
00:02:32.749 net/ixgbe: not in enabled drivers build config
00:02:32.749 net/mana: not in enabled drivers build config
00:02:32.749 net/memif: not in enabled drivers build config
00:02:32.749 net/mlx4: not in enabled drivers build config
00:02:32.749 net/mlx5: not in enabled drivers build config
00:02:32.749 net/mvneta: not in enabled drivers build config
00:02:32.749 net/mvpp2: not in enabled drivers build config
00:02:32.749 net/netvsc: not in enabled drivers build config
00:02:32.749 net/nfb: not in enabled drivers build config
00:02:32.749 net/nfp: not in enabled drivers build config
00:02:32.749 net/ngbe: not in enabled drivers build config
00:02:32.749 net/null: not in enabled drivers build config
00:02:32.749 net/octeontx: not in enabled drivers build config
00:02:32.749 net/octeon_ep: not in enabled drivers build config
00:02:32.749 net/pcap: not in enabled drivers build config
00:02:32.749 net/pfe: not in enabled drivers build config
00:02:32.749 net/qede: not in enabled drivers build config
00:02:32.749 net/ring: not in enabled drivers build config
00:02:32.749 net/sfc: not in enabled drivers build config
00:02:32.749 net/softnic: not in enabled drivers build config
00:02:32.749 net/tap: not in enabled drivers build config
00:02:32.749 net/thunderx: not in enabled drivers build config
00:02:32.749 net/txgbe: not in enabled drivers build config
00:02:32.749 net/vdev_netvsc: not in enabled drivers build config
00:02:32.749 net/vhost: not in enabled drivers build config
00:02:32.749 net/virtio: not in enabled drivers build config
00:02:32.749 net/vmxnet3: not in enabled drivers build config
00:02:32.749 raw/*: missing internal dependency, "rawdev"
00:02:32.749 crypto/armv8: not in enabled drivers build config
00:02:32.749 crypto/bcmfs: not in enabled drivers build config
00:02:32.749 crypto/caam_jr: not in enabled drivers build config
00:02:32.749 crypto/ccp: not in enabled drivers build config
00:02:32.749 crypto/cnxk: not in enabled drivers build config
00:02:32.749 crypto/dpaa_sec: not in enabled drivers build config
00:02:32.749 crypto/dpaa2_sec: not in enabled drivers build config
00:02:32.749 crypto/ipsec_mb: not in enabled drivers build config
00:02:32.749 crypto/mlx5: not in enabled drivers build config
00:02:32.749 crypto/mvsam: not in enabled drivers build config
00:02:32.749 crypto/nitrox: not in enabled drivers build config
00:02:32.749 crypto/null: not in enabled drivers build config
00:02:32.749 crypto/octeontx: not in enabled drivers build config
00:02:32.749 crypto/openssl: not in enabled drivers build config
00:02:32.749 crypto/scheduler: not in enabled drivers build config
00:02:32.749 crypto/uadk: not in enabled drivers build config
00:02:32.749 crypto/virtio: not in enabled drivers build config
00:02:32.749 compress/isal: not in enabled drivers build config
00:02:32.749 compress/mlx5: not in enabled drivers build config
00:02:32.749 compress/nitrox: not in enabled drivers build config
00:02:32.749 compress/octeontx: not in enabled drivers build config
00:02:32.749 compress/zlib: not in enabled drivers build config
00:02:32.749 regex/*: missing internal dependency, "regexdev"
00:02:32.749 ml/*: missing internal dependency, "mldev"
00:02:32.749 vdpa/ifc: not in enabled drivers build config
00:02:32.749 vdpa/mlx5: not in enabled drivers build config
00:02:32.749 vdpa/nfp: not in enabled drivers build config
00:02:32.749 vdpa/sfc: not in enabled drivers build config
00:02:32.749 event/*: missing internal dependency, "eventdev"
00:02:32.749 baseband/*: missing internal dependency, "bbdev"
00:02:32.749 gpu/*: missing internal dependency, "gpudev"
00:02:32.749
00:02:32.749
00:02:32.749 Build targets in project: 85
00:02:32.749
00:02:32.749 DPDK 24.03.0
00:02:32.749
00:02:32.749 User defined options
00:02:32.749 buildtype : debug
00:02:32.749 default_library : shared
00:02:32.749 libdir : lib
00:02:32.749 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:32.749 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:32.749 c_link_args :
00:02:32.749 cpu_instruction_set: native
00:02:32.749 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:32.749 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:32.749 enable_docs : false
00:02:32.749 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:32.749 enable_kmods : false
00:02:32.749 max_lcores : 128
00:02:32.749 tests : false
00:02:32.749
00:02:32.749 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:33.020 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:33.020 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:33.020 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:33.020 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:33.286 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:33.286 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:33.286 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:33.286 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:33.286 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:33.286 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:33.286 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:33.286 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:33.286 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:33.286 [13/268] Linking static target lib/librte_kvargs.a
00:02:33.286 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:33.286 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:33.286 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:33.286 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:33.286 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:33.286 [19/268] Linking static target lib/librte_log.a
00:02:33.286 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:33.286 [21/268] Linking static target lib/librte_pci.a
00:02:33.547 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:33.547 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:33.547 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:33.547 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:33.547 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:33.547 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:33.547 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:33.547 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:33.547 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:33.547 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:33.547 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:33.547 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:33.806 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:33.806 [35/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:33.806 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:33.806 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:33.806 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:33.806 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:33.806 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:33.806 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:33.806 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:33.806 [43/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:33.806 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:33.806 [45/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:33.806 [46/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:33.806 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:33.806 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:33.806 [49/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:33.806 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:33.806 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:33.806 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:33.806 [53/268] Linking static target lib/librte_ring.a
00:02:33.806 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:33.806 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:33.806 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:33.806 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:33.806 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:33.806 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:33.806 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:33.806 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:33.806 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:33.806 [63/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:33.806 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:33.806 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:33.806 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:33.806 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:33.806 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:33.806 [69/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.806 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:33.806 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:33.806 [72/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:33.806 [73/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:33.806 [74/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:33.806 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:33.806 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:33.806 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:33.806 [78/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:33.806 [79/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:33.806 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:33.806 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:33.806 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:33.806 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:33.806 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:33.806 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:33.806 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:33.806 [87/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:33.806 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:33.806 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:33.806 [90/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:33.806 [91/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:33.806 [92/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:33.806 [93/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:33.806 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:33.806 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:33.806 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:33.806 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:33.806 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:33.806 [99/268] Linking static target lib/librte_meter.a
00:02:33.806 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.806 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:33.806 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:33.806 [103/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:33.806 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:33.806 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:33.806 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:33.806 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:33.806 [108/268] Linking static target lib/librte_telemetry.a
00:02:33.806 [109/268] Linking static target lib/librte_mempool.a
00:02:34.065 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:34.065 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:34.065 [112/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:34.065 [113/268] Linking static target lib/librte_net.a
00:02:34.065 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:34.065 [115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:34.065 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:34.065 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:34.065 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:34.065 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:34.065 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:34.065 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:34.065 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:34.065 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:34.065 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:34.065 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:34.065 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:34.065 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:34.065 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:34.065 [129/268] Linking static target lib/librte_rcu.a
00:02:34.065 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:34.065 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:34.065 [132/268] Linking static target lib/librte_cmdline.a
00:02:34.065 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:34.065 [134/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:34.065 [135/268] Linking static target lib/librte_mbuf.a
00:02:34.065 [136/268] Linking static target lib/librte_eal.a
00:02:34.065 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.065 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.065 [139/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:34.065 [140/268] Linking target lib/librte_log.so.24.1
00:02:34.065 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:34.065 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:34.065 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:34.065 [144/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:34.065 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:34.065 [146/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.324 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:34.324 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:34.324 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:34.324 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:34.324 [151/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.324
[152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.324 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.324 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.324 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.324 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.324 [157/268] Linking static target lib/librte_timer.a 00:02:34.324 [158/268] Linking static target lib/librte_dmadev.a 00:02:34.324 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:34.324 [160/268] Linking static target lib/librte_compressdev.a 00:02:34.324 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:34.324 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:34.324 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.324 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.324 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.324 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:34.324 [167/268] Linking target lib/librte_kvargs.so.24.1 00:02:34.324 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:34.324 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.324 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.324 [171/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.324 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:34.324 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.324 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:34.324 [175/268] Linking static target lib/librte_power.a 00:02:34.324 [176/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.324 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:34.324 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:34.324 [179/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:34.324 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:34.324 [181/268] Linking static target lib/librte_reorder.a 00:02:34.324 [182/268] Linking target lib/librte_telemetry.so.24.1 00:02:34.324 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:34.324 [184/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.324 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:34.324 [186/268] Linking static target lib/librte_security.a 00:02:34.324 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:34.583 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:34.583 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:34.583 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.583 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:34.583 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.583 [193/268] Linking static target lib/librte_hash.a 00:02:34.583 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:34.583 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:34.583 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.583 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:34.583 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:34.583 [199/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.583 [200/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.583 [201/268] Linking static target drivers/librte_mempool_ring.a 00:02:34.583 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:34.583 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:34.583 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:34.583 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:34.583 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:34.583 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.583 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:34.842 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.842 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.842 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.842 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.842 [213/268] Linking static target lib/librte_cryptodev.a 00:02:34.842 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:34.842 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.842 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.842 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.842 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.101 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.101 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.101 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.101 [222/268] Linking static target lib/librte_ethdev.a 00:02:35.101 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.101 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.360 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.360 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.618 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.553 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 
00:02:36.553 [229/268] Linking static target lib/librte_vhost.a 00:02:36.553 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.458 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.726 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.294 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.294 [234/268] Linking target lib/librte_eal.so.24.1 00:02:44.294 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:44.294 [236/268] Linking target lib/librte_ring.so.24.1 00:02:44.294 [237/268] Linking target lib/librte_timer.so.24.1 00:02:44.294 [238/268] Linking target lib/librte_meter.so.24.1 00:02:44.294 [239/268] Linking target lib/librte_pci.so.24.1 00:02:44.294 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:44.294 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:44.552 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:44.552 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:44.552 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:44.552 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:44.552 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:44.552 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:44.552 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:44.552 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:44.811 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:44.811 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:44.811 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.811 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:44.811 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.811 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:44.811 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:44.811 [257/268] Linking target lib/librte_net.so.24.1 00:02:44.811 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:45.069 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:45.069 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:45.069 [261/268] Linking target lib/librte_hash.so.24.1 00:02:45.069 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:45.069 [263/268] Linking target lib/librte_security.so.24.1 00:02:45.069 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:45.328 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:45.328 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:45.328 [267/268] Linking target lib/librte_power.so.24.1 00:02:45.328 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:45.328 INFO: autodetecting backend as ninja 00:02:45.328 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:55.307 CC lib/ut/ut.o 00:02:55.307 CC lib/log/log.o 00:02:55.307 CC lib/ut_mock/mock.o 
00:02:55.307 CC lib/log/log_flags.o 00:02:55.307 CC lib/log/log_deprecated.o 00:02:55.565 LIB libspdk_ut.a 00:02:55.565 LIB libspdk_ut_mock.a 00:02:55.565 LIB libspdk_log.a 00:02:55.565 SO libspdk_ut.so.2.0 00:02:55.565 SO libspdk_ut_mock.so.6.0 00:02:55.565 SO libspdk_log.so.7.1 00:02:55.565 SYMLINK libspdk_ut.so 00:02:55.565 SYMLINK libspdk_ut_mock.so 00:02:55.853 SYMLINK libspdk_log.so 00:02:56.179 CC lib/dma/dma.o 00:02:56.179 CC lib/util/base64.o 00:02:56.179 CC lib/util/bit_array.o 00:02:56.179 CXX lib/trace_parser/trace.o 00:02:56.179 CC lib/util/cpuset.o 00:02:56.179 CC lib/util/crc16.o 00:02:56.179 CC lib/ioat/ioat.o 00:02:56.179 CC lib/util/crc32.o 00:02:56.179 CC lib/util/crc32c.o 00:02:56.179 CC lib/util/crc32_ieee.o 00:02:56.179 CC lib/util/crc64.o 00:02:56.179 CC lib/util/dif.o 00:02:56.179 CC lib/util/fd.o 00:02:56.179 CC lib/util/fd_group.o 00:02:56.179 CC lib/util/file.o 00:02:56.179 CC lib/util/hexlify.o 00:02:56.179 CC lib/util/iov.o 00:02:56.179 CC lib/util/math.o 00:02:56.179 CC lib/util/net.o 00:02:56.179 CC lib/util/pipe.o 00:02:56.179 CC lib/util/strerror_tls.o 00:02:56.179 CC lib/util/string.o 00:02:56.179 CC lib/util/uuid.o 00:02:56.179 CC lib/util/xor.o 00:02:56.179 CC lib/util/zipf.o 00:02:56.179 CC lib/util/md5.o 00:02:56.179 CC lib/vfio_user/host/vfio_user.o 00:02:56.179 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.179 LIB libspdk_dma.a 00:02:56.179 SO libspdk_dma.so.5.0 00:02:56.466 LIB libspdk_ioat.a 00:02:56.466 SYMLINK libspdk_dma.so 00:02:56.466 SO libspdk_ioat.so.7.0 00:02:56.466 SYMLINK libspdk_ioat.so 00:02:56.466 LIB libspdk_vfio_user.a 00:02:56.466 SO libspdk_vfio_user.so.5.0 00:02:56.466 LIB libspdk_util.a 00:02:56.466 SYMLINK libspdk_vfio_user.so 00:02:56.466 SO libspdk_util.so.10.1 00:02:56.724 SYMLINK libspdk_util.so 00:02:56.724 LIB libspdk_trace_parser.a 00:02:56.724 SO libspdk_trace_parser.so.6.0 00:02:56.982 SYMLINK libspdk_trace_parser.so 00:02:56.982 CC lib/rdma_utils/rdma_utils.o 00:02:56.982 CC lib/json/json_parse.o 00:02:56.982 CC lib/env_dpdk/env.o 00:02:56.983 CC lib/json/json_util.o 00:02:56.983 CC lib/env_dpdk/memory.o 00:02:56.983 CC lib/vmd/vmd.o 00:02:56.983 CC lib/json/json_write.o 00:02:56.983 CC lib/env_dpdk/pci.o 00:02:56.983 CC lib/vmd/led.o 00:02:56.983 CC lib/idxd/idxd.o 00:02:56.983 CC lib/env_dpdk/init.o 00:02:56.983 CC lib/idxd/idxd_user.o 00:02:56.983 CC lib/env_dpdk/threads.o 00:02:56.983 CC lib/idxd/idxd_kernel.o 00:02:56.983 CC lib/env_dpdk/pci_ioat.o 00:02:56.983 CC lib/conf/conf.o 00:02:56.983 CC lib/env_dpdk/pci_virtio.o 00:02:56.983 CC lib/env_dpdk/pci_vmd.o 00:02:56.983 CC lib/env_dpdk/pci_idxd.o 00:02:56.983 CC lib/env_dpdk/pci_event.o 00:02:56.983 CC lib/env_dpdk/sigbus_handler.o 00:02:56.983 CC lib/env_dpdk/pci_dpdk.o 00:02:56.983 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.983 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.241 LIB libspdk_rdma_utils.a 00:02:57.241 LIB libspdk_conf.a 00:02:57.241 SO libspdk_rdma_utils.so.1.0 00:02:57.241 SO libspdk_conf.so.6.0 00:02:57.241 LIB libspdk_json.a 00:02:57.241 SYMLINK libspdk_rdma_utils.so 00:02:57.241 SO libspdk_json.so.6.0 00:02:57.241 SYMLINK libspdk_conf.so 00:02:57.499 SYMLINK libspdk_json.so 00:02:57.499 LIB libspdk_idxd.a 00:02:57.499 SO libspdk_idxd.so.12.1 00:02:57.499 SYMLINK libspdk_idxd.so 00:02:57.499 LIB libspdk_vmd.a 00:02:57.757 CC lib/rdma_provider/common.o 00:02:57.757 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:57.757 SO libspdk_vmd.so.6.0 00:02:57.757 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.757 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.757 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:57.757 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.757 SYMLINK libspdk_vmd.so 00:02:57.757 LIB libspdk_rdma_provider.a 00:02:57.757 SO libspdk_rdma_provider.so.7.0 00:02:58.015 LIB libspdk_jsonrpc.a 00:02:58.015 SYMLINK libspdk_rdma_provider.so 00:02:58.015 SO libspdk_jsonrpc.so.6.0 00:02:58.015 SYMLINK libspdk_jsonrpc.so 00:02:58.015 LIB libspdk_env_dpdk.a 00:02:58.015 SO libspdk_env_dpdk.so.15.1 00:02:58.274 SYMLINK libspdk_env_dpdk.so 00:02:58.274 CC lib/rpc/rpc.o 00:02:58.533 LIB libspdk_rpc.a 00:02:58.533 SO libspdk_rpc.so.6.0 00:02:58.533 SYMLINK libspdk_rpc.so 00:02:58.792 CC lib/keyring/keyring.o 00:02:58.792 CC lib/keyring/keyring_rpc.o 00:02:58.792 CC lib/notify/notify.o 00:02:58.792 CC lib/notify/notify_rpc.o 00:02:58.792 CC lib/trace/trace.o 00:02:58.792 CC lib/trace/trace_flags.o 00:02:58.792 CC lib/trace/trace_rpc.o 00:02:59.049 LIB libspdk_notify.a 00:02:59.049 SO libspdk_notify.so.6.0 00:02:59.049 LIB libspdk_keyring.a 00:02:59.049 LIB libspdk_trace.a 00:02:59.049 SO libspdk_keyring.so.2.0 00:02:59.049 SYMLINK libspdk_notify.so 00:02:59.049 SO libspdk_trace.so.11.0 00:02:59.308 SYMLINK libspdk_keyring.so 00:02:59.308 SYMLINK libspdk_trace.so 00:02:59.566 CC lib/sock/sock.o 00:02:59.566 CC lib/sock/sock_rpc.o 00:02:59.566 CC lib/thread/thread.o 00:02:59.566 CC lib/thread/iobuf.o 00:02:59.825 LIB libspdk_sock.a 00:02:59.825 SO libspdk_sock.so.10.0 00:03:00.083 SYMLINK libspdk_sock.so 00:03:00.341 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.341 CC lib/nvme/nvme_ctrlr.o 00:03:00.341 CC lib/nvme/nvme_fabric.o 00:03:00.341 CC lib/nvme/nvme_ns_cmd.o 00:03:00.341 CC lib/nvme/nvme_ns.o 00:03:00.341 CC lib/nvme/nvme_pcie_common.o 00:03:00.341 CC lib/nvme/nvme_pcie.o 00:03:00.341 CC lib/nvme/nvme_qpair.o 00:03:00.341 CC lib/nvme/nvme.o 00:03:00.341 CC lib/nvme/nvme_quirks.o 00:03:00.341 CC lib/nvme/nvme_transport.o 00:03:00.341 CC lib/nvme/nvme_discovery.o 00:03:00.341 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:00.341 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.341 CC lib/nvme/nvme_tcp.o 00:03:00.341 CC lib/nvme/nvme_opal.o 00:03:00.341 CC lib/nvme/nvme_io_msg.o 00:03:00.341 CC lib/nvme/nvme_poll_group.o 00:03:00.341 CC lib/nvme/nvme_zns.o 00:03:00.341 CC lib/nvme/nvme_stubs.o 00:03:00.341 CC lib/nvme/nvme_auth.o 00:03:00.341 CC lib/nvme/nvme_cuse.o 00:03:00.341 CC lib/nvme/nvme_vfio_user.o 00:03:00.341 CC lib/nvme/nvme_rdma.o 00:03:00.600 LIB libspdk_thread.a 00:03:00.600 SO libspdk_thread.so.11.0 00:03:00.858 SYMLINK libspdk_thread.so 00:03:01.116 CC lib/accel/accel_rpc.o 00:03:01.116 CC lib/accel/accel.o 00:03:01.116 CC lib/accel/accel_sw.o 00:03:01.116 CC lib/init/json_config.o 00:03:01.116 CC lib/init/subsystem.o 00:03:01.116 CC lib/init/subsystem_rpc.o 00:03:01.116 CC lib/init/rpc.o 00:03:01.116 CC lib/virtio/virtio.o 00:03:01.116 CC lib/virtio/virtio_vhost_user.o 00:03:01.116 CC lib/virtio/virtio_vfio_user.o 00:03:01.116 CC lib/vfu_tgt/tgt_endpoint.o 00:03:01.116 CC lib/virtio/virtio_pci.o 00:03:01.116 CC lib/vfu_tgt/tgt_rpc.o 00:03:01.116 CC lib/blob/blobstore.o 00:03:01.116 CC lib/blob/request.o 00:03:01.116 CC lib/blob/zeroes.o 00:03:01.116 CC lib/blob/blob_bs_dev.o 00:03:01.116 CC lib/fsdev/fsdev.o 00:03:01.116 CC lib/fsdev/fsdev_io.o 00:03:01.116 CC lib/fsdev/fsdev_rpc.o 00:03:01.374 LIB libspdk_init.a 00:03:01.374 SO libspdk_init.so.6.0 00:03:01.374 LIB libspdk_virtio.a 00:03:01.374 LIB libspdk_vfu_tgt.a 00:03:01.374 SO libspdk_virtio.so.7.0 00:03:01.374 SYMLINK libspdk_init.so 00:03:01.374 SO libspdk_vfu_tgt.so.3.0 00:03:01.374 SYMLINK 
libspdk_virtio.so 00:03:01.374 SYMLINK libspdk_vfu_tgt.so 00:03:01.633 LIB libspdk_fsdev.a 00:03:01.633 SO libspdk_fsdev.so.2.0 00:03:01.633 SYMLINK libspdk_fsdev.so 00:03:01.633 CC lib/event/app.o 00:03:01.633 CC lib/event/reactor.o 00:03:01.633 CC lib/event/log_rpc.o 00:03:01.633 CC lib/event/app_rpc.o 00:03:01.633 CC lib/event/scheduler_static.o 00:03:01.892 LIB libspdk_accel.a 00:03:01.892 SO libspdk_accel.so.16.0 00:03:01.892 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:01.892 SYMLINK libspdk_accel.so 00:03:01.892 LIB libspdk_nvme.a 00:03:02.150 LIB libspdk_event.a 00:03:02.150 SO libspdk_event.so.14.0 00:03:02.150 SO libspdk_nvme.so.15.0 00:03:02.150 SYMLINK libspdk_event.so 00:03:02.150 CC lib/bdev/bdev.o 00:03:02.150 CC lib/bdev/bdev_rpc.o 00:03:02.150 CC lib/bdev/bdev_zone.o 00:03:02.150 CC lib/bdev/part.o 00:03:02.150 CC lib/bdev/scsi_nvme.o 00:03:02.408 SYMLINK libspdk_nvme.so 00:03:02.408 LIB libspdk_fuse_dispatcher.a 00:03:02.408 SO libspdk_fuse_dispatcher.so.1.0 00:03:02.667 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.234 LIB libspdk_blob.a 00:03:03.235 SO libspdk_blob.so.12.0 00:03:03.235 SYMLINK libspdk_blob.so 00:03:03.494 CC lib/lvol/lvol.o 00:03:03.494 CC lib/blobfs/blobfs.o 00:03:03.494 CC lib/blobfs/tree.o 00:03:04.061 LIB libspdk_bdev.a 00:03:04.061 SO libspdk_bdev.so.17.0 00:03:04.320 LIB libspdk_blobfs.a 00:03:04.320 SO libspdk_blobfs.so.11.0 00:03:04.320 SYMLINK libspdk_bdev.so 00:03:04.320 LIB libspdk_lvol.a 00:03:04.320 SO libspdk_lvol.so.11.0 00:03:04.320 SYMLINK libspdk_blobfs.so 00:03:04.320 SYMLINK libspdk_lvol.so 00:03:04.580 CC lib/nvmf/ctrlr.o 00:03:04.580 CC lib/nvmf/ctrlr_discovery.o 00:03:04.580 CC lib/nvmf/ctrlr_bdev.o 00:03:04.580 CC lib/nvmf/subsystem.o 00:03:04.580 CC lib/ftl/ftl_init.o 00:03:04.580 CC lib/nvmf/nvmf.o 00:03:04.580 CC lib/ftl/ftl_core.o 00:03:04.580 CC lib/nvmf/nvmf_rpc.o 00:03:04.580 CC lib/nvmf/transport.o 00:03:04.580 CC lib/nvmf/tcp.o 00:03:04.580 CC lib/ftl/ftl_layout.o 00:03:04.580 CC lib/ublk/ublk_rpc.o 00:03:04.580 CC lib/scsi/dev.o 00:03:04.580 CC lib/ublk/ublk.o 00:03:04.580 CC lib/ftl/ftl_debug.o 00:03:04.580 CC lib/nvmf/stubs.o 00:03:04.580 CC lib/scsi/lun.o 00:03:04.580 CC lib/nvmf/mdns_server.o 00:03:04.580 CC lib/ftl/ftl_sb.o 00:03:04.580 CC lib/scsi/port.o 00:03:04.580 CC lib/ftl/ftl_io.o 00:03:04.580 CC lib/scsi/scsi.o 00:03:04.580 CC lib/nbd/nbd_rpc.o 00:03:04.580 CC lib/nbd/nbd.o 00:03:04.580 CC lib/scsi/scsi_bdev.o 00:03:04.580 CC lib/ftl/ftl_l2p.o 00:03:04.580 CC lib/nvmf/vfio_user.o 00:03:04.580 CC lib/nvmf/rdma.o 00:03:04.580 CC lib/nvmf/auth.o 00:03:04.580 CC lib/ftl/ftl_l2p_flat.o 00:03:04.580 CC lib/scsi/scsi_pr.o 00:03:04.580 CC lib/scsi/scsi_rpc.o 00:03:04.580 CC lib/ftl/ftl_nv_cache.o 00:03:04.580 CC lib/scsi/task.o 00:03:04.580 CC lib/ftl/ftl_band.o 00:03:04.580 CC lib/ftl/ftl_band_ops.o 00:03:04.580 CC lib/ftl/ftl_rq.o 00:03:04.580 CC lib/ftl/ftl_writer.o 00:03:04.580 CC lib/ftl/ftl_reloc.o 00:03:04.580 CC lib/ftl/ftl_l2p_cache.o 00:03:04.580 CC lib/ftl/ftl_p2l.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.580 CC lib/ftl/ftl_p2l_log.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.580 
CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.580 CC lib/ftl/utils/ftl_conf.o 00:03:04.580 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.580 CC lib/ftl/utils/ftl_md.o 00:03:04.580 CC lib/ftl/utils/ftl_mempool.o 00:03:04.580 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.580 CC lib/ftl/utils/ftl_property.o 00:03:04.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.580 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.580 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.580 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.580 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.580 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.580 CC lib/ftl/base/ftl_base_dev.o 00:03:04.580 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.580 CC lib/ftl/ftl_trace.o 00:03:05.150 LIB libspdk_nbd.a 00:03:05.150 SO libspdk_nbd.so.7.0 00:03:05.150 LIB libspdk_scsi.a 00:03:05.150 LIB libspdk_ublk.a 00:03:05.150 SYMLINK libspdk_nbd.so 00:03:05.150 SO libspdk_scsi.so.9.0 00:03:05.150 SO libspdk_ublk.so.3.0 00:03:05.408 SYMLINK libspdk_scsi.so 00:03:05.408 SYMLINK libspdk_ublk.so 00:03:05.667 CC lib/vhost/vhost_rpc.o 00:03:05.667 CC lib/vhost/vhost.o 00:03:05.667 CC lib/vhost/vhost_scsi.o 00:03:05.667 CC lib/vhost/vhost_blk.o 00:03:05.667 CC lib/vhost/rte_vhost_user.o 00:03:05.667 CC lib/iscsi/conn.o 00:03:05.667 CC lib/iscsi/init_grp.o 00:03:05.667 CC lib/iscsi/iscsi.o 00:03:05.667 CC lib/iscsi/param.o 00:03:05.667 CC lib/iscsi/portal_grp.o 00:03:05.667 CC lib/iscsi/iscsi_subsystem.o 00:03:05.667 CC lib/iscsi/tgt_node.o 00:03:05.667 CC lib/iscsi/iscsi_rpc.o 00:03:05.667 CC lib/iscsi/task.o 00:03:05.667 LIB libspdk_ftl.a 00:03:05.924 SO libspdk_ftl.so.9.0 00:03:05.924 SYMLINK libspdk_ftl.so 00:03:06.182 LIB libspdk_nvmf.a 00:03:06.440 LIB libspdk_vhost.a 00:03:06.440 SO libspdk_nvmf.so.20.0 00:03:06.440 SO libspdk_vhost.so.8.0 00:03:06.440 SYMLINK libspdk_vhost.so 00:03:06.440 SYMLINK libspdk_nvmf.so 00:03:06.699 LIB libspdk_iscsi.a 00:03:06.699 SO libspdk_iscsi.so.8.0 00:03:06.699 SYMLINK libspdk_iscsi.so 00:03:07.266 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.266 CC module/vfu_device/vfu_virtio.o 00:03:07.266 CC module/vfu_device/vfu_virtio_blk.o 00:03:07.266 CC module/vfu_device/vfu_virtio_scsi.o 00:03:07.266 CC module/vfu_device/vfu_virtio_rpc.o 00:03:07.266 CC module/vfu_device/vfu_virtio_fs.o 00:03:07.525 CC module/accel/ioat/accel_ioat.o 00:03:07.525 CC module/accel/ioat/accel_ioat_rpc.o 00:03:07.525 CC module/accel/error/accel_error.o 00:03:07.525 CC module/accel/error/accel_error_rpc.o 00:03:07.525 LIB libspdk_env_dpdk_rpc.a 00:03:07.525 CC module/keyring/linux/keyring_rpc.o 00:03:07.525 CC module/keyring/linux/keyring.o 00:03:07.525 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.525 CC module/accel/dsa/accel_dsa.o 00:03:07.525 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.525 CC module/keyring/file/keyring.o 00:03:07.525 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.525 CC module/accel/iaa/accel_iaa.o 00:03:07.525 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.525 CC module/keyring/file/keyring_rpc.o 00:03:07.525 CC module/accel/iaa/accel_iaa_rpc.o 00:03:07.525 CC module/fsdev/aio/fsdev_aio.o 00:03:07.525 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:07.525 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:07.525 CC module/blob/bdev/blob_bdev.o 00:03:07.525 CC module/sock/posix/posix.o 00:03:07.525 SO libspdk_env_dpdk_rpc.so.6.0 00:03:07.525 SYMLINK libspdk_env_dpdk_rpc.so 00:03:07.525 LIB libspdk_keyring_linux.a 00:03:07.525 LIB libspdk_scheduler_gscheduler.a 00:03:07.525 LIB libspdk_keyring_file.a 00:03:07.525 SO libspdk_keyring_linux.so.1.0 00:03:07.525 LIB libspdk_accel_ioat.a 00:03:07.525 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.525 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.525 SO libspdk_keyring_file.so.2.0 00:03:07.525 LIB libspdk_accel_error.a 00:03:07.784 LIB libspdk_scheduler_dynamic.a 00:03:07.784 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.784 SO libspdk_accel_ioat.so.6.0 00:03:07.784 LIB libspdk_accel_iaa.a 00:03:07.785 SO libspdk_accel_error.so.2.0 00:03:07.785 SYMLINK libspdk_keyring_linux.so 00:03:07.785 SO libspdk_scheduler_dynamic.so.4.0 00:03:07.785 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.785 SYMLINK libspdk_keyring_file.so 00:03:07.785 SO libspdk_accel_iaa.so.3.0 00:03:07.785 LIB libspdk_accel_dsa.a 00:03:07.785 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.785 SYMLINK libspdk_accel_ioat.so 00:03:07.785 LIB libspdk_blob_bdev.a 00:03:07.785 SYMLINK libspdk_accel_error.so 00:03:07.785 SO libspdk_accel_dsa.so.5.0 00:03:07.785 SYMLINK libspdk_scheduler_dynamic.so 00:03:07.785 SO libspdk_blob_bdev.so.12.0 00:03:07.785 SYMLINK libspdk_accel_iaa.so 00:03:07.785 SYMLINK libspdk_accel_dsa.so 00:03:07.785 SYMLINK libspdk_blob_bdev.so 00:03:07.785 LIB libspdk_vfu_device.a 00:03:07.785 SO libspdk_vfu_device.so.3.0 00:03:08.044 SYMLINK libspdk_vfu_device.so 00:03:08.044 LIB libspdk_fsdev_aio.a 00:03:08.044 LIB libspdk_sock_posix.a 00:03:08.044 SO libspdk_fsdev_aio.so.1.0 00:03:08.044 SO libspdk_sock_posix.so.6.0 00:03:08.044 SYMLINK libspdk_fsdev_aio.so 00:03:08.044 SYMLINK libspdk_sock_posix.so 00:03:08.303 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.303 CC module/bdev/delay/vbdev_delay.o 00:03:08.303 CC module/bdev/malloc/bdev_malloc.o 00:03:08.303 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.303 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.303 CC module/bdev/error/vbdev_error.o 00:03:08.303 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.303 CC module/bdev/raid/bdev_raid.o 00:03:08.303 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.303 CC module/bdev/raid/raid0.o 00:03:08.303 CC module/bdev/raid/raid1.o 00:03:08.303 CC module/bdev/raid/concat.o 00:03:08.303 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.303 CC module/bdev/aio/bdev_aio.o 00:03:08.303 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.303 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.303 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.303 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.303 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.303 CC module/bdev/gpt/gpt.o 00:03:08.303 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.303 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.303 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.303 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.303 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.303 CC module/bdev/split/vbdev_split.o 00:03:08.303 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.303 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.303 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.303 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.303 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.303 CC module/bdev/nvme/bdev_nvme.o 00:03:08.303 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.303 CC 
module/bdev/nvme/nvme_rpc.o 00:03:08.303 CC module/bdev/null/bdev_null.o 00:03:08.303 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.303 CC module/bdev/null/bdev_null_rpc.o 00:03:08.303 CC module/bdev/nvme/vbdev_opal.o 00:03:08.303 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.303 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.303 CC module/bdev/ftl/bdev_ftl.o 00:03:08.303 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.562 LIB libspdk_blobfs_bdev.a 00:03:08.562 LIB libspdk_bdev_error.a 00:03:08.562 SO libspdk_blobfs_bdev.so.6.0 00:03:08.562 LIB libspdk_bdev_gpt.a 00:03:08.562 LIB libspdk_bdev_split.a 00:03:08.562 LIB libspdk_bdev_passthru.a 00:03:08.562 SO libspdk_bdev_error.so.6.0 00:03:08.562 SO libspdk_bdev_gpt.so.6.0 00:03:08.562 LIB libspdk_bdev_null.a 00:03:08.562 SO libspdk_bdev_split.so.6.0 00:03:08.562 LIB libspdk_bdev_malloc.a 00:03:08.562 SO libspdk_bdev_passthru.so.6.0 00:03:08.562 LIB libspdk_bdev_aio.a 00:03:08.562 SYMLINK libspdk_blobfs_bdev.so 00:03:08.821 LIB libspdk_bdev_ftl.a 00:03:08.821 SO libspdk_bdev_null.so.6.0 00:03:08.821 LIB libspdk_bdev_delay.a 00:03:08.821 SO libspdk_bdev_malloc.so.6.0 00:03:08.821 SO libspdk_bdev_ftl.so.6.0 00:03:08.821 SYMLINK libspdk_bdev_error.so 00:03:08.821 SO libspdk_bdev_aio.so.6.0 00:03:08.821 SYMLINK libspdk_bdev_gpt.so 00:03:08.821 LIB libspdk_bdev_zone_block.a 00:03:08.821 SYMLINK libspdk_bdev_split.so 00:03:08.821 SO libspdk_bdev_delay.so.6.0 00:03:08.821 LIB libspdk_bdev_iscsi.a 00:03:08.821 SYMLINK libspdk_bdev_passthru.so 00:03:08.821 SO libspdk_bdev_zone_block.so.6.0 00:03:08.821 SYMLINK libspdk_bdev_null.so 00:03:08.821 SO libspdk_bdev_iscsi.so.6.0 00:03:08.821 SYMLINK libspdk_bdev_malloc.so 00:03:08.821 SYMLINK libspdk_bdev_aio.so 00:03:08.821 SYMLINK libspdk_bdev_ftl.so 00:03:08.821 SYMLINK libspdk_bdev_delay.so 00:03:08.821 SYMLINK libspdk_bdev_zone_block.so 00:03:08.821 LIB libspdk_bdev_lvol.a 00:03:08.821 SYMLINK libspdk_bdev_iscsi.so 00:03:08.821 SO libspdk_bdev_lvol.so.6.0 00:03:08.821 LIB libspdk_bdev_virtio.a 00:03:08.821 SYMLINK libspdk_bdev_lvol.so 00:03:08.821 SO libspdk_bdev_virtio.so.6.0 00:03:09.080 SYMLINK libspdk_bdev_virtio.so 00:03:09.080 LIB libspdk_bdev_raid.a 00:03:09.339 SO libspdk_bdev_raid.so.6.0 00:03:09.339 SYMLINK libspdk_bdev_raid.so 00:03:10.276 LIB libspdk_bdev_nvme.a 00:03:10.276 SO libspdk_bdev_nvme.so.7.1 00:03:10.276 SYMLINK libspdk_bdev_nvme.so 00:03:11.214 CC module/event/subsystems/vmd/vmd.o 00:03:11.214 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.214 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.214 CC module/event/subsystems/sock/sock.o 00:03:11.214 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.214 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.214 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.214 CC module/event/subsystems/keyring/keyring.o 00:03:11.214 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:11.214 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.214 LIB libspdk_event_vfu_tgt.a 00:03:11.214 LIB libspdk_event_sock.a 00:03:11.214 LIB libspdk_event_vhost_blk.a 00:03:11.214 LIB libspdk_event_vmd.a 00:03:11.214 LIB libspdk_event_scheduler.a 00:03:11.214 LIB libspdk_event_keyring.a 00:03:11.214 LIB libspdk_event_fsdev.a 00:03:11.214 LIB libspdk_event_iobuf.a 00:03:11.214 SO libspdk_event_sock.so.5.0 00:03:11.214 SO libspdk_event_keyring.so.1.0 00:03:11.214 SO libspdk_event_vfu_tgt.so.3.0 00:03:11.214 SO libspdk_event_vhost_blk.so.3.0 00:03:11.214 SO libspdk_event_fsdev.so.1.0 00:03:11.214 SO libspdk_event_vmd.so.6.0 00:03:11.214 SO 
libspdk_event_scheduler.so.4.0 00:03:11.214 SO libspdk_event_iobuf.so.3.0 00:03:11.214 SYMLINK libspdk_event_sock.so 00:03:11.214 SYMLINK libspdk_event_keyring.so 00:03:11.214 SYMLINK libspdk_event_fsdev.so 00:03:11.214 SYMLINK libspdk_event_vfu_tgt.so 00:03:11.214 SYMLINK libspdk_event_vhost_blk.so 00:03:11.214 SYMLINK libspdk_event_scheduler.so 00:03:11.214 SYMLINK libspdk_event_vmd.so 00:03:11.214 SYMLINK libspdk_event_iobuf.so 00:03:11.473 CC module/event/subsystems/accel/accel.o 00:03:11.732 LIB libspdk_event_accel.a 00:03:11.732 SO libspdk_event_accel.so.6.0 00:03:11.732 SYMLINK libspdk_event_accel.so 00:03:12.300 CC module/event/subsystems/bdev/bdev.o 00:03:12.300 LIB libspdk_event_bdev.a 00:03:12.300 SO libspdk_event_bdev.so.6.0 00:03:12.300 SYMLINK libspdk_event_bdev.so 00:03:12.868 CC module/event/subsystems/ublk/ublk.o 00:03:12.868 CC module/event/subsystems/nbd/nbd.o 00:03:12.868 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.868 CC module/event/subsystems/scsi/scsi.o 00:03:12.868 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.868 LIB libspdk_event_ublk.a 00:03:12.868 LIB libspdk_event_nbd.a 00:03:12.868 LIB libspdk_event_scsi.a 00:03:12.868 SO libspdk_event_ublk.so.3.0 00:03:12.868 SO libspdk_event_nbd.so.6.0 00:03:12.868 SO libspdk_event_scsi.so.6.0 00:03:12.868 LIB libspdk_event_nvmf.a 00:03:12.868 SYMLINK libspdk_event_nbd.so 00:03:12.868 SYMLINK libspdk_event_ublk.so 00:03:12.868 SO libspdk_event_nvmf.so.6.0 00:03:12.868 SYMLINK libspdk_event_scsi.so 00:03:13.127 SYMLINK libspdk_event_nvmf.so 00:03:13.386 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.386 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.386 LIB libspdk_event_vhost_scsi.a 00:03:13.386 LIB libspdk_event_iscsi.a 00:03:13.386 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.386 SO libspdk_event_iscsi.so.6.0 00:03:13.386 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.644 SYMLINK libspdk_event_iscsi.so 00:03:13.644 SO libspdk.so.6.0 00:03:13.644 SYMLINK libspdk.so 00:03:13.904 CC app/spdk_nvme_perf/perf.o 00:03:13.904 CXX app/trace/trace.o 00:03:14.168 CC app/spdk_nvme_identify/identify.o 00:03:14.168 CC test/rpc_client/rpc_client_test.o 00:03:14.168 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.168 TEST_HEADER include/spdk/accel.h 00:03:14.168 TEST_HEADER include/spdk/accel_module.h 00:03:14.168 TEST_HEADER include/spdk/assert.h 00:03:14.168 TEST_HEADER include/spdk/barrier.h 00:03:14.168 TEST_HEADER include/spdk/base64.h 00:03:14.168 TEST_HEADER include/spdk/bdev.h 00:03:14.168 CC app/trace_record/trace_record.o 00:03:14.168 TEST_HEADER include/spdk/bdev_module.h 00:03:14.168 TEST_HEADER include/spdk/bit_array.h 00:03:14.168 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.168 TEST_HEADER include/spdk/bit_pool.h 00:03:14.168 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.168 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.168 CC app/spdk_lspci/spdk_lspci.o 00:03:14.168 TEST_HEADER include/spdk/conf.h 00:03:14.168 TEST_HEADER include/spdk/blob.h 00:03:14.168 TEST_HEADER include/spdk/blobfs.h 00:03:14.168 TEST_HEADER include/spdk/config.h 00:03:14.168 CC app/spdk_top/spdk_top.o 00:03:14.168 TEST_HEADER include/spdk/cpuset.h 00:03:14.168 TEST_HEADER include/spdk/crc16.h 00:03:14.168 TEST_HEADER include/spdk/crc64.h 00:03:14.168 TEST_HEADER include/spdk/crc32.h 00:03:14.168 TEST_HEADER include/spdk/dma.h 00:03:14.168 TEST_HEADER include/spdk/dif.h 00:03:14.168 TEST_HEADER include/spdk/endian.h 00:03:14.168 TEST_HEADER include/spdk/fd_group.h 00:03:14.168 TEST_HEADER include/spdk/env_dpdk.h 
00:03:14.168 TEST_HEADER include/spdk/event.h 00:03:14.168 TEST_HEADER include/spdk/env.h 00:03:14.168 TEST_HEADER include/spdk/fd.h 00:03:14.168 TEST_HEADER include/spdk/fsdev.h 00:03:14.168 TEST_HEADER include/spdk/file.h 00:03:14.168 CC app/spdk_dd/spdk_dd.o 00:03:14.168 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.168 TEST_HEADER include/spdk/hexlify.h 00:03:14.168 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.168 TEST_HEADER include/spdk/ftl.h 00:03:14.168 TEST_HEADER include/spdk/idxd.h 00:03:14.168 TEST_HEADER include/spdk/histogram_data.h 00:03:14.168 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.168 TEST_HEADER include/spdk/init.h 00:03:14.168 TEST_HEADER include/spdk/ioat.h 00:03:14.169 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.169 TEST_HEADER include/spdk/json.h 00:03:14.169 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.169 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.169 TEST_HEADER include/spdk/keyring.h 00:03:14.169 TEST_HEADER include/spdk/keyring_module.h 00:03:14.169 TEST_HEADER include/spdk/likely.h 00:03:14.169 TEST_HEADER include/spdk/log.h 00:03:14.169 TEST_HEADER include/spdk/memory.h 00:03:14.169 CC app/nvmf_tgt/nvmf_main.o 00:03:14.169 TEST_HEADER include/spdk/mmio.h 00:03:14.169 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.169 TEST_HEADER include/spdk/md5.h 00:03:14.169 TEST_HEADER include/spdk/lvol.h 00:03:14.169 TEST_HEADER include/spdk/nbd.h 00:03:14.169 TEST_HEADER include/spdk/net.h 00:03:14.169 TEST_HEADER include/spdk/nvme.h 00:03:14.169 TEST_HEADER include/spdk/notify.h 00:03:14.169 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.169 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.169 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.169 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.169 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.169 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.169 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.169 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.169 TEST_HEADER include/spdk/nvmf.h 00:03:14.169 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.169 TEST_HEADER include/spdk/opal.h 00:03:14.169 TEST_HEADER include/spdk/opal_spec.h 00:03:14.169 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.169 TEST_HEADER include/spdk/pipe.h 00:03:14.169 TEST_HEADER include/spdk/pci_ids.h 00:03:14.169 TEST_HEADER include/spdk/reduce.h 00:03:14.169 TEST_HEADER include/spdk/queue.h 00:03:14.169 TEST_HEADER include/spdk/rpc.h 00:03:14.169 TEST_HEADER include/spdk/scheduler.h 00:03:14.169 TEST_HEADER include/spdk/scsi.h 00:03:14.169 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.169 TEST_HEADER include/spdk/string.h 00:03:14.169 TEST_HEADER include/spdk/sock.h 00:03:14.169 TEST_HEADER include/spdk/thread.h 00:03:14.169 TEST_HEADER include/spdk/trace.h 00:03:14.169 TEST_HEADER include/spdk/stdinc.h 00:03:14.169 TEST_HEADER include/spdk/tree.h 00:03:14.169 TEST_HEADER include/spdk/trace_parser.h 00:03:14.169 TEST_HEADER include/spdk/uuid.h 00:03:14.169 TEST_HEADER include/spdk/ublk.h 00:03:14.169 TEST_HEADER include/spdk/util.h 00:03:14.169 TEST_HEADER include/spdk/version.h 00:03:14.169 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.169 TEST_HEADER include/spdk/vhost.h 00:03:14.169 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.169 TEST_HEADER include/spdk/xor.h 00:03:14.169 CC app/spdk_tgt/spdk_tgt.o 00:03:14.169 TEST_HEADER include/spdk/vmd.h 00:03:14.169 TEST_HEADER include/spdk/zipf.h 00:03:14.169 CXX test/cpp_headers/accel.o 00:03:14.169 CXX test/cpp_headers/assert.o 00:03:14.169 CXX test/cpp_headers/accel_module.o 00:03:14.169 CXX 
test/cpp_headers/barrier.o 00:03:14.169 CXX test/cpp_headers/base64.o 00:03:14.169 CXX test/cpp_headers/bit_array.o 00:03:14.169 CXX test/cpp_headers/bit_pool.o 00:03:14.169 CXX test/cpp_headers/bdev.o 00:03:14.169 CXX test/cpp_headers/bdev_zone.o 00:03:14.169 CXX test/cpp_headers/bdev_module.o 00:03:14.169 CXX test/cpp_headers/blob_bdev.o 00:03:14.169 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.169 CXX test/cpp_headers/blobfs.o 00:03:14.169 CXX test/cpp_headers/conf.o 00:03:14.169 CXX test/cpp_headers/cpuset.o 00:03:14.169 CXX test/cpp_headers/blob.o 00:03:14.169 CXX test/cpp_headers/config.o 00:03:14.169 CXX test/cpp_headers/crc16.o 00:03:14.169 CXX test/cpp_headers/crc32.o 00:03:14.169 CXX test/cpp_headers/crc64.o 00:03:14.169 CXX test/cpp_headers/dif.o 00:03:14.169 CXX test/cpp_headers/endian.o 00:03:14.169 CXX test/cpp_headers/dma.o 00:03:14.169 CXX test/cpp_headers/env_dpdk.o 00:03:14.169 CXX test/cpp_headers/env.o 00:03:14.169 CXX test/cpp_headers/event.o 00:03:14.169 CXX test/cpp_headers/fd_group.o 00:03:14.169 CXX test/cpp_headers/fd.o 00:03:14.169 CXX test/cpp_headers/file.o 00:03:14.169 CXX test/cpp_headers/fsdev_module.o 00:03:14.169 CXX test/cpp_headers/fsdev.o 00:03:14.169 CXX test/cpp_headers/hexlify.o 00:03:14.169 CXX test/cpp_headers/ftl.o 00:03:14.169 CXX test/cpp_headers/gpt_spec.o 00:03:14.169 CXX test/cpp_headers/idxd.o 00:03:14.169 CXX test/cpp_headers/histogram_data.o 00:03:14.169 CXX test/cpp_headers/init.o 00:03:14.169 CXX test/cpp_headers/idxd_spec.o 00:03:14.169 CXX test/cpp_headers/ioat.o 00:03:14.169 CXX test/cpp_headers/ioat_spec.o 00:03:14.169 CXX test/cpp_headers/json.o 00:03:14.169 CXX test/cpp_headers/iscsi_spec.o 00:03:14.169 CXX test/cpp_headers/jsonrpc.o 00:03:14.169 CXX test/cpp_headers/keyring_module.o 00:03:14.169 CXX test/cpp_headers/likely.o 00:03:14.169 CXX test/cpp_headers/keyring.o 00:03:14.169 CXX test/cpp_headers/log.o 00:03:14.169 CXX test/cpp_headers/lvol.o 00:03:14.169 CXX test/cpp_headers/md5.o 00:03:14.169 CXX test/cpp_headers/memory.o 00:03:14.169 CXX test/cpp_headers/mmio.o 00:03:14.169 CXX test/cpp_headers/nbd.o 00:03:14.169 CXX test/cpp_headers/net.o 00:03:14.169 CXX test/cpp_headers/nvme_intel.o 00:03:14.169 CXX test/cpp_headers/notify.o 00:03:14.169 CXX test/cpp_headers/nvme.o 00:03:14.169 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.169 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.169 CXX test/cpp_headers/nvme_spec.o 00:03:14.169 CXX test/cpp_headers/nvme_zns.o 00:03:14.169 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.169 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.169 CXX test/cpp_headers/nvmf.o 00:03:14.169 CXX test/cpp_headers/nvmf_spec.o 00:03:14.169 CXX test/cpp_headers/nvmf_transport.o 00:03:14.169 CXX test/cpp_headers/opal.o 00:03:14.169 CXX test/cpp_headers/opal_spec.o 00:03:14.169 CC examples/util/zipf/zipf.o 00:03:14.169 CC test/app/histogram_perf/histogram_perf.o 00:03:14.169 CC test/thread/poller_perf/poller_perf.o 00:03:14.169 CC test/env/memory/memory_ut.o 00:03:14.169 CC test/env/vtophys/vtophys.o 00:03:14.169 CXX test/cpp_headers/pci_ids.o 00:03:14.169 CC test/app/jsoncat/jsoncat.o 00:03:14.444 CC test/app/stub/stub.o 00:03:14.444 CC app/fio/nvme/fio_plugin.o 00:03:14.444 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.444 CC test/env/pci/pci_ut.o 00:03:14.444 CC examples/ioat/perf/perf.o 00:03:14.444 CC examples/ioat/verify/verify.o 00:03:14.444 CC test/dma/test_dma/test_dma.o 00:03:14.444 CC app/fio/bdev/fio_plugin.o 00:03:14.444 CC test/app/bdev_svc/bdev_svc.o 00:03:14.444 LINK spdk_lspci 00:03:14.709 
LINK interrupt_tgt 00:03:14.709 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.709 LINK rpc_client_test 00:03:14.709 LINK spdk_trace_record 00:03:14.709 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.709 LINK iscsi_tgt 00:03:14.709 LINK zipf 00:03:14.709 LINK histogram_perf 00:03:14.709 LINK vtophys 00:03:14.709 LINK spdk_nvme_discover 00:03:14.709 LINK spdk_tgt 00:03:14.709 LINK nvmf_tgt 00:03:14.709 CXX test/cpp_headers/pipe.o 00:03:14.709 LINK jsoncat 00:03:14.709 CXX test/cpp_headers/queue.o 00:03:14.709 CXX test/cpp_headers/reduce.o 00:03:14.709 CXX test/cpp_headers/rpc.o 00:03:14.709 CXX test/cpp_headers/scheduler.o 00:03:14.709 CXX test/cpp_headers/scsi_spec.o 00:03:14.709 CXX test/cpp_headers/scsi.o 00:03:14.709 CXX test/cpp_headers/sock.o 00:03:14.709 CXX test/cpp_headers/stdinc.o 00:03:14.709 CXX test/cpp_headers/string.o 00:03:14.709 CXX test/cpp_headers/thread.o 00:03:14.968 CXX test/cpp_headers/trace.o 00:03:14.968 CXX test/cpp_headers/trace_parser.o 00:03:14.968 CXX test/cpp_headers/tree.o 00:03:14.968 CXX test/cpp_headers/ublk.o 00:03:14.968 CXX test/cpp_headers/util.o 00:03:14.968 CXX test/cpp_headers/uuid.o 00:03:14.968 CXX test/cpp_headers/version.o 00:03:14.968 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.968 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.968 CXX test/cpp_headers/vhost.o 00:03:14.968 CXX test/cpp_headers/vmd.o 00:03:14.968 CXX test/cpp_headers/xor.o 00:03:14.968 CXX test/cpp_headers/zipf.o 00:03:14.968 LINK spdk_dd 00:03:14.968 LINK verify 00:03:14.968 LINK poller_perf 00:03:14.968 LINK env_dpdk_post_init 00:03:14.968 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.968 LINK stub 00:03:14.968 LINK ioat_perf 00:03:14.968 LINK bdev_svc 00:03:15.226 LINK spdk_trace 00:03:15.226 LINK pci_ut 00:03:15.226 CC examples/idxd/perf/perf.o 00:03:15.226 CC examples/vmd/led/led.o 00:03:15.226 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.226 LINK spdk_nvme_identify 00:03:15.226 CC examples/sock/hello_world/hello_sock.o 00:03:15.226 LINK spdk_bdev 00:03:15.226 LINK spdk_nvme 00:03:15.226 LINK nvme_fuzz 00:03:15.226 CC examples/thread/thread/thread_ex.o 00:03:15.485 LINK spdk_nvme_perf 00:03:15.485 CC test/event/reactor_perf/reactor_perf.o 00:03:15.485 CC test/event/reactor/reactor.o 00:03:15.485 CC test/event/event_perf/event_perf.o 00:03:15.485 CC test/event/app_repeat/app_repeat.o 00:03:15.485 CC test/event/scheduler/scheduler.o 00:03:15.485 LINK spdk_top 00:03:15.485 LINK mem_callbacks 00:03:15.485 LINK test_dma 00:03:15.485 LINK lsvmd 00:03:15.485 LINK led 00:03:15.485 LINK vhost_fuzz 00:03:15.485 LINK hello_sock 00:03:15.485 LINK reactor_perf 00:03:15.485 CC app/vhost/vhost.o 00:03:15.485 LINK reactor 00:03:15.485 LINK event_perf 00:03:15.485 LINK app_repeat 00:03:15.485 LINK idxd_perf 00:03:15.485 LINK thread 00:03:15.744 LINK scheduler 00:03:15.744 LINK vhost 00:03:16.004 LINK memory_ut 00:03:16.004 CC test/nvme/reserve/reserve.o 00:03:16.004 CC test/nvme/overhead/overhead.o 00:03:16.004 CC test/nvme/reset/reset.o 00:03:16.004 CC test/nvme/err_injection/err_injection.o 00:03:16.004 CC test/nvme/aer/aer.o 00:03:16.004 CC test/nvme/sgl/sgl.o 00:03:16.004 CC test/nvme/connect_stress/connect_stress.o 00:03:16.004 CC test/accel/dif/dif.o 00:03:16.004 CC test/nvme/simple_copy/simple_copy.o 00:03:16.004 CC test/nvme/startup/startup.o 00:03:16.004 CC test/nvme/e2edp/nvme_dp.o 00:03:16.004 CC test/nvme/cuse/cuse.o 00:03:16.004 CC 
test/nvme/compliance/nvme_compliance.o 00:03:16.004 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.004 CC test/nvme/boot_partition/boot_partition.o 00:03:16.004 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.004 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.004 CC test/nvme/fdp/fdp.o 00:03:16.004 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.004 CC examples/nvme/arbitration/arbitration.o 00:03:16.004 CC examples/nvme/abort/abort.o 00:03:16.004 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.004 CC examples/nvme/hello_world/hello_world.o 00:03:16.004 CC examples/nvme/hotplug/hotplug.o 00:03:16.004 CC examples/nvme/reconnect/reconnect.o 00:03:16.004 CC test/blobfs/mkfs/mkfs.o 00:03:16.004 CC examples/accel/perf/accel_perf.o 00:03:16.004 CC test/lvol/esnap/esnap.o 00:03:16.004 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:16.004 CC examples/blob/cli/blobcli.o 00:03:16.004 CC examples/blob/hello_world/hello_blob.o 00:03:16.263 LINK startup 00:03:16.263 LINK pmr_persistence 00:03:16.263 LINK doorbell_aers 00:03:16.263 LINK boot_partition 00:03:16.263 LINK reserve 00:03:16.263 LINK err_injection 00:03:16.263 LINK connect_stress 00:03:16.263 LINK cmb_copy 00:03:16.263 LINK fused_ordering 00:03:16.263 LINK mkfs 00:03:16.263 LINK simple_copy 00:03:16.263 LINK reset 00:03:16.263 LINK hello_world 00:03:16.263 LINK sgl 00:03:16.263 LINK hotplug 00:03:16.263 LINK nvme_dp 00:03:16.263 LINK aer 00:03:16.263 LINK overhead 00:03:16.263 LINK nvme_compliance 00:03:16.263 LINK fdp 00:03:16.263 LINK reconnect 00:03:16.263 LINK abort 00:03:16.263 LINK arbitration 00:03:16.263 LINK hello_fsdev 00:03:16.263 LINK hello_blob 00:03:16.521 LINK nvme_manage 00:03:16.521 LINK accel_perf 00:03:16.521 LINK iscsi_fuzz 00:03:16.521 LINK dif 00:03:16.521 LINK blobcli 00:03:17.088 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.088 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.088 LINK cuse 00:03:17.088 CC test/bdev/bdevio/bdevio.o 00:03:17.346 LINK hello_bdev 00:03:17.346 LINK bdevio 00:03:17.606 LINK bdevperf 00:03:18.173 CC examples/nvmf/nvmf/nvmf.o 00:03:18.432 LINK nvmf 00:03:19.808 LINK esnap 00:03:19.808 00:03:19.808 real 0m55.828s 00:03:19.808 user 8m24.595s 00:03:19.808 sys 3m48.360s 00:03:19.808 03:50:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:19.808 03:50:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:19.808 ************************************ 00:03:19.808 END TEST make 00:03:19.808 ************************************ 00:03:19.808 03:50:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.808 03:50:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.808 03:50:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.808 03:50:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.808 03:50:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.808 03:50:19 -- pm/common@44 -- $ pid=3979744 00:03:19.808 03:50:19 -- pm/common@50 -- $ kill -TERM 3979744 00:03:19.808 03:50:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.808 03:50:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.808 03:50:19 -- pm/common@44 -- $ pid=3979745 00:03:19.808 03:50:19 -- pm/common@50 -- $ kill -TERM 3979745 00:03:19.808 03:50:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.808 03:50:19 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:19.808 03:50:19 -- pm/common@44 -- $ pid=3979747 00:03:19.808 03:50:19 -- pm/common@50 -- $ kill -TERM 3979747 00:03:19.808 03:50:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.808 03:50:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:19.808 03:50:19 -- pm/common@44 -- $ pid=3979774 00:03:19.808 03:50:19 -- pm/common@50 -- $ sudo -E kill -TERM 3979774 00:03:19.808 03:50:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:19.808 03:50:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:20.068 03:50:19 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:20.068 03:50:19 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:20.068 03:50:19 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:20.068 03:50:19 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:20.068 03:50:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.068 03:50:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.068 03:50:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.068 03:50:19 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.068 03:50:19 -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.068 03:50:19 -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.068 03:50:19 -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.068 03:50:19 -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.068 03:50:19 -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.068 03:50:19 -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.068 03:50:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.068 03:50:19 -- scripts/common.sh@344 -- # case "$op" in 00:03:20.068 03:50:19 -- scripts/common.sh@345 -- # : 1 00:03:20.068 03:50:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.068 03:50:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:20.068 03:50:19 -- scripts/common.sh@365 -- # decimal 1 00:03:20.068 03:50:19 -- scripts/common.sh@353 -- # local d=1 00:03:20.068 03:50:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.068 03:50:19 -- scripts/common.sh@355 -- # echo 1 00:03:20.068 03:50:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.068 03:50:19 -- scripts/common.sh@366 -- # decimal 2 00:03:20.068 03:50:19 -- scripts/common.sh@353 -- # local d=2 00:03:20.068 03:50:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.068 03:50:19 -- scripts/common.sh@355 -- # echo 2 00:03:20.068 03:50:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.068 03:50:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.068 03:50:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.068 03:50:19 -- scripts/common.sh@368 -- # return 0 00:03:20.068 03:50:19 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.068 03:50:19 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.068 --rc genhtml_branch_coverage=1 00:03:20.068 --rc genhtml_function_coverage=1 00:03:20.068 --rc genhtml_legend=1 00:03:20.068 --rc geninfo_all_blocks=1 00:03:20.068 --rc geninfo_unexecuted_blocks=1 00:03:20.068 00:03:20.068 ' 00:03:20.068 03:50:19 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.068 --rc genhtml_branch_coverage=1 00:03:20.068 --rc genhtml_function_coverage=1 00:03:20.068 --rc genhtml_legend=1 00:03:20.068 --rc geninfo_all_blocks=1 00:03:20.068 --rc geninfo_unexecuted_blocks=1 00:03:20.068 00:03:20.068 ' 00:03:20.068 03:50:19 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.068 --rc genhtml_branch_coverage=1 00:03:20.068 --rc genhtml_function_coverage=1 00:03:20.068 --rc genhtml_legend=1 00:03:20.068 --rc geninfo_all_blocks=1 00:03:20.068 --rc geninfo_unexecuted_blocks=1 00:03:20.068 00:03:20.068 ' 00:03:20.068 03:50:19 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.068 --rc genhtml_branch_coverage=1 00:03:20.068 --rc genhtml_function_coverage=1 00:03:20.068 --rc genhtml_legend=1 00:03:20.068 --rc geninfo_all_blocks=1 00:03:20.068 --rc geninfo_unexecuted_blocks=1 00:03:20.068 00:03:20.068 ' 00:03:20.068 03:50:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:20.068 03:50:19 -- nvmf/common.sh@7 -- # uname -s 00:03:20.068 03:50:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.068 03:50:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.068 03:50:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.068 03:50:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:20.068 03:50:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.068 03:50:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.068 03:50:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.068 03:50:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.068 03:50:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.068 03:50:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.068 03:50:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:20.068 03:50:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:20.068 03:50:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.068 03:50:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.068 03:50:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:20.068 03:50:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:20.068 03:50:19 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:20.068 03:50:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:20.068 03:50:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.068 03:50:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.068 03:50:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.068 03:50:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.068 03:50:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.068 03:50:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.068 03:50:19 -- paths/export.sh@5 -- # export PATH 00:03:20.068 03:50:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.068 03:50:19 -- nvmf/common.sh@51 -- # : 0 00:03:20.068 03:50:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:20.068 03:50:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:20.068 03:50:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:20.068 03:50:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.068 03:50:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.068 03:50:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:20.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:20.069 03:50:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:20.069 03:50:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:20.069 03:50:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:20.069 03:50:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.069 03:50:19 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.069 03:50:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.069 03:50:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.069 03:50:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
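One genuine shell failure is captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string because -eq needs an integer on both sides, hence "[: : integer expression expected". A minimal sketch of the usual guard for that pattern, with an illustrative variable name rather than the one common.sh actually reads:

    # test(1)'s -eq aborts on an empty operand:
    #   [ "$FLAG" -eq 1 ]   ->   [: : integer expression expected
    # Defaulting the expansion keeps the comparison well-formed.
    FLAG=""                        # illustrative; may arrive unset from CI config
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi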
00:03:20.069 03:50:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.069 03:50:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.069 03:50:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:20.069 03:50:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:20.069 03:50:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:20.069 03:50:19 -- spdk/autotest.sh@48 -- # udevadm_pid=4041839 00:03:20.069 03:50:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:20.069 03:50:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:20.069 03:50:19 -- pm/common@17 -- # local monitor 00:03:20.069 03:50:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.069 03:50:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.069 03:50:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.069 03:50:19 -- pm/common@21 -- # date +%s 00:03:20.069 03:50:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.069 03:50:19 -- pm/common@21 -- # date +%s 00:03:20.069 03:50:19 -- pm/common@25 -- # sleep 1 00:03:20.069 03:50:19 -- pm/common@21 -- # date +%s 00:03:20.069 03:50:19 -- pm/common@21 -- # date +%s 00:03:20.069 03:50:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799019 00:03:20.069 03:50:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799019 00:03:20.069 03:50:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799019 00:03:20.069 03:50:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799019 00:03:20.069 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799019_collect-cpu-load.pm.log 00:03:20.069 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799019_collect-vmstat.pm.log 00:03:20.069 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799019_collect-cpu-temp.pm.log 00:03:20.069 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799019_collect-bmc-pm.bmc.pm.log 00:03:21.007 03:50:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:21.007 03:50:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:21.007 03:50:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.007 03:50:20 -- common/autotest_common.sh@10 -- # set +x 00:03:21.007 03:50:20 -- spdk/autotest.sh@59 -- # create_test_list 00:03:21.007 03:50:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:21.007 03:50:20 -- common/autotest_common.sh@10 -- # set +x 00:03:21.266 03:50:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:21.266 03:50:20 
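The echo traced above pipes future kernel core dumps into SPDK's core-collector.sh, after the previous systemd-coredump handler was saved in old_core_pattern. A minimal bash sketch of that mechanism, assuming root and using illustrative paths (the restore step is implied by the saved value, not shown at this point in the log):

    # Route kernel core dumps to a collector script for the duration of a run.
    out_dir=/path/to/output/coredumps                     # illustrative path
    mkdir -p "$out_dir"
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # save current handler
    # A leading '|' makes the kernel pipe each dump into the given program;
    # %P = PID, %s = signal number, %t = dump time (see core(5)).
    echo "|/path/to/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # Put the original handler back when the run ends, even on failure.
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT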
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.266 03:50:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.266 03:50:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:21.266 03:50:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.266 03:50:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:21.266 03:50:20 -- common/autotest_common.sh@1457 -- # uname 00:03:21.266 03:50:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:21.266 03:50:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:21.266 03:50:20 -- common/autotest_common.sh@1477 -- # uname 00:03:21.266 03:50:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:21.266 03:50:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:21.266 03:50:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:21.266 lcov: LCOV version 1.15 00:03:21.266 03:50:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:33.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:33.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.791 03:50:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:45.791 03:50:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.791 03:50:44 -- common/autotest_common.sh@10 -- # set +x 00:03:45.791 03:50:44 -- spdk/autotest.sh@78 -- # rm -f 00:03:45.791 03:50:44 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.098 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:49.098 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:49.098 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:49.098 03:50:48 -- 
spdk/autotest.sh@83 -- # get_zoned_devs
00:03:49.098 03:50:48 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:49.098 03:50:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:49.098 03:50:48 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:49.098 03:50:48 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:49.098 03:50:48 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:49.098 03:50:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:49.098 03:50:48 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0
00:03:49.098 03:50:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:49.098 03:50:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:49.098 03:50:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:49.098 03:50:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:49.098 03:50:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:49.098 03:50:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:49.098 03:50:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:49.098 03:50:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:49.098 03:50:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:49.098 03:50:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:49.098 03:50:48 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:49.098 No valid GPT data, bailing
00:03:49.098 03:50:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:49.098 03:50:48 -- scripts/common.sh@394 -- # pt=
00:03:49.098 03:50:48 -- scripts/common.sh@395 -- # return 1
00:03:49.098 03:50:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:49.098 1+0 records in
00:03:49.098 1+0 records out
00:03:49.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00155938 s, 672 MB/s
00:03:49.098 03:50:48 -- spdk/autotest.sh@105 -- # sync
00:03:49.098 03:50:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:49.098 03:50:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:49.098 03:50:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:54.374 03:50:53 -- spdk/autotest.sh@111 -- # uname -s
00:03:54.374 03:50:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:54.374 03:50:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:54.374 03:50:53 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:57.666 Hugepages
00:03:57.666 node hugesize free / total
00:03:57.666 node0 1048576kB 0 / 0
00:03:57.666 node0 2048kB 0 / 0
00:03:57.666 node1 1048576kB 0 / 0
00:03:57.666 node1 2048kB 0 / 0
00:03:57.666
00:03:57.666 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:57.666 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:57.666 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:57.666 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:57.666 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:57.666 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:57.666 03:50:56 -- spdk/autotest.sh@117 -- # uname -s
00:03:57.666 03:50:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:57.666 03:50:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:57.666 03:50:56 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:00.201 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:00.201 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:01.137 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:01.137 03:51:00 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:02.515 03:51:01 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:02.515 03:51:01 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:02.515 03:51:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:02.515 03:51:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:02.515 03:51:01 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:02.515 03:51:01 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:02.515 03:51:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:02.515 03:51:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:02.515 03:51:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:02.515 03:51:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:02.515 03:51:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:04:02.515 03:51:01 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:05.051 Waiting for block devices as requested
00:04:05.051 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:04:05.309 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:05.309 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:05.309 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:05.568 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:05.568 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:05.568 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:05.568 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:05.827 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:05.827 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:05.827 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:06.094 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.094 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.094 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.355 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.355 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:06.355 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:06.355 03:51:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.355 03:51:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:06.355 03:51:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:06.355 03:51:05 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:06.355 03:51:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:06.355 03:51:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:06.355 03:51:05 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:06.614 03:51:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.614 03:51:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:06.614 03:51:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:06.614 03:51:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:06.614 03:51:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.614 03:51:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.614 03:51:05 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:06.614 03:51:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.614 03:51:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.614 03:51:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:06.614 03:51:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.614 03:51:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.614 03:51:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.614 03:51:05 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.614 03:51:05 -- common/autotest_common.sh@1543 -- # continue 00:04:06.614 03:51:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:06.614 03:51:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.614 03:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:06.614 03:51:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:06.614 03:51:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.614 03:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:06.614 03:51:05 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.903 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.903 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.473 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:10.473 03:51:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.473 03:51:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.473 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.473 03:51:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.473 03:51:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:10.473 03:51:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.473 03:51:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:10.473 03:51:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:10.473 03:51:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:10.473 03:51:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.473 03:51:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:10.473 03:51:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.473 03:51:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.473 03:51:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.473 03:51:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:10.473 03:51:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.473 03:51:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:10.473 03:51:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:10.473 03:51:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.473 03:51:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:10.473 03:51:09 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:10.473 03:51:09 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:10.473 03:51:09 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:10.473 03:51:09 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:10.473 03:51:09 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:10.473 03:51:09 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:10.473 03:51:09 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4056290 00:04:10.473 03:51:09 -- common/autotest_common.sh@1585 -- # waitforlisten 4056290 00:04:10.473 03:51:09 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.473 03:51:09 -- common/autotest_common.sh@835 -- # '[' -z 4056290 ']' 00:04:10.473 03:51:09 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.473 03:51:09 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.473 03:51:09 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.473 03:51:09 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.473 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.732 [2024-12-10 03:51:09.786664] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
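The opal_revert_cleanup trace above keeps only controllers whose PCI device ID matches 0x0a54, read from /sys/bus/pci/devices/<bdf>/device. A minimal bash sketch of that filter; the bdfs array and its single entry are stand-ins for what the traced gen_nvme.sh/jq pipeline produces:

    target=0x0a54                       # target PCI device ID (8086:0a54 in this run)
    bdfs=(0000:5e:00.0)                 # stand-in; autotest derives these from gen_nvme.sh
    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(< "/sys/bus/pci/devices/$bdf/device")    # e.g. 0x0a54
        [[ $device == "$target" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"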
00:04:10.732 [2024-12-10 03:51:09.786713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056290 ]
00:04:10.732 [2024-12-10 03:51:09.858950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:10.732 [2024-12-10 03:51:09.900452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:10.991 03:51:10 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:10.991 03:51:10 -- common/autotest_common.sh@868 -- # return 0
00:04:10.991 03:51:10 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:04:10.991 03:51:10 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:04:10.991 03:51:10 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:14.280 nvme0n1
00:04:14.280 03:51:13 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:14.280 [2024-12-10 03:51:13.314766] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:14.280 [2024-12-10 03:51:13.314795] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:14.280 request:
00:04:14.280 {
00:04:14.280 "nvme_ctrlr_name": "nvme0",
00:04:14.280 "password": "test",
00:04:14.280 "method": "bdev_nvme_opal_revert",
00:04:14.280 "req_id": 1
00:04:14.280 }
00:04:14.280 Got JSON-RPC error response
00:04:14.280 response:
00:04:14.280 {
00:04:14.280 "code": -32603,
00:04:14.280 "message": "Internal error"
00:04:14.280 }
00:04:14.280 03:51:13 -- common/autotest_common.sh@1591 -- # true
00:04:14.280 03:51:13 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:04:14.280 03:51:13 -- common/autotest_common.sh@1595 -- # killprocess 4056290
00:04:14.280 03:51:13 -- common/autotest_common.sh@954 -- # '[' -z 4056290 ']'
00:04:14.280 03:51:13 -- common/autotest_common.sh@958 -- # kill -0 4056290
00:04:14.280 03:51:13 -- common/autotest_common.sh@959 -- # uname
00:04:14.280 03:51:13 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:14.280 03:51:13 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056290
00:04:14.280 03:51:13 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:14.280 03:51:13 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:14.280 03:51:13 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056290'
00:04:14.280 killing process with pid 4056290
00:04:14.280 03:51:13 -- common/autotest_common.sh@973 -- # kill 4056290
00:04:14.280 03:51:13 -- common/autotest_common.sh@978 -- # wait 4056290
00:04:16.185 03:51:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:16.185 03:51:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:16.185 03:51:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:16.185 03:51:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:16.185 03:51:15 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:16.185 03:51:15 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:16.185 03:51:15 -- common/autotest_common.sh@10 -- # set +x
00:04:16.185 03:51:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:16.185 03:51:15 -- spdk/autotest.sh@155 -- # run_test env
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.185 03:51:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.185 03:51:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.185 03:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:16.185 ************************************ 00:04:16.185 START TEST env 00:04:16.185 ************************************ 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.185 * Looking for test storage... 00:04:16.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.185 03:51:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.185 03:51:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.185 03:51:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.185 03:51:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.185 03:51:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.185 03:51:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.185 03:51:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.185 03:51:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.185 03:51:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.185 03:51:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.185 03:51:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.185 03:51:15 env -- scripts/common.sh@344 -- # case "$op" in 00:04:16.185 03:51:15 env -- scripts/common.sh@345 -- # : 1 00:04:16.185 03:51:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.185 03:51:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.185 03:51:15 env -- scripts/common.sh@365 -- # decimal 1 00:04:16.185 03:51:15 env -- scripts/common.sh@353 -- # local d=1 00:04:16.185 03:51:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.185 03:51:15 env -- scripts/common.sh@355 -- # echo 1 00:04:16.185 03:51:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.185 03:51:15 env -- scripts/common.sh@366 -- # decimal 2 00:04:16.185 03:51:15 env -- scripts/common.sh@353 -- # local d=2 00:04:16.185 03:51:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.185 03:51:15 env -- scripts/common.sh@355 -- # echo 2 00:04:16.185 03:51:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.185 03:51:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.185 03:51:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.185 03:51:15 env -- scripts/common.sh@368 -- # return 0 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.185 --rc genhtml_branch_coverage=1 00:04:16.185 --rc genhtml_function_coverage=1 00:04:16.185 --rc genhtml_legend=1 00:04:16.185 --rc geninfo_all_blocks=1 00:04:16.185 --rc geninfo_unexecuted_blocks=1 00:04:16.185 00:04:16.185 ' 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.185 --rc genhtml_branch_coverage=1 00:04:16.185 --rc genhtml_function_coverage=1 00:04:16.185 --rc genhtml_legend=1 00:04:16.185 --rc geninfo_all_blocks=1 00:04:16.185 --rc geninfo_unexecuted_blocks=1 00:04:16.185 00:04:16.185 ' 00:04:16.185 03:51:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.185 --rc genhtml_branch_coverage=1 00:04:16.186 --rc genhtml_function_coverage=1 00:04:16.186 --rc genhtml_legend=1 00:04:16.186 --rc geninfo_all_blocks=1 00:04:16.186 --rc geninfo_unexecuted_blocks=1 00:04:16.186 00:04:16.186 ' 00:04:16.186 03:51:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.186 --rc genhtml_branch_coverage=1 00:04:16.186 --rc genhtml_function_coverage=1 00:04:16.186 --rc genhtml_legend=1 00:04:16.186 --rc geninfo_all_blocks=1 00:04:16.186 --rc geninfo_unexecuted_blocks=1 00:04:16.186 00:04:16.186 ' 00:04:16.186 03:51:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.186 03:51:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.186 03:51:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.186 03:51:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.186 ************************************ 00:04:16.186 START TEST env_memory 00:04:16.186 ************************************ 00:04:16.186 03:51:15 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.186 00:04:16.186 00:04:16.186 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.186 http://cunit.sourceforge.net/ 00:04:16.186 00:04:16.186 00:04:16.186 Suite: memory 00:04:16.186 Test: alloc and free memory map ...[2024-12-10 03:51:15.310735] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:16.186 passed
00:04:16.186 Test: mem map translation ...[2024-12-10 03:51:15.329581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:16.186 [2024-12-10 03:51:15.329606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:16.186 [2024-12-10 03:51:15.329654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:16.186 [2024-12-10 03:51:15.329660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:16.186 passed
00:04:16.186 Test: mem map registration ...[2024-12-10 03:51:15.365332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:16.186 [2024-12-10 03:51:15.365344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:16.186 passed
00:04:16.186 Test: mem map adjacent registrations ...passed
00:04:16.186
00:04:16.186 Run Summary: Type Total Ran Passed Failed Inactive
00:04:16.186 suites 1 1 n/a 0 0
00:04:16.186 tests 4 4 4 0 0
00:04:16.186 asserts 152 152 152 0 n/a
00:04:16.186
00:04:16.186 Elapsed time = 0.135 seconds
00:04:16.186
00:04:16.186 real 0m0.147s
00:04:16.186 user 0m0.139s
00:04:16.186 sys 0m0.008s
00:04:16.186 03:51:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:16.186 03:51:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:16.186 ************************************
00:04:16.186 END TEST env_memory
00:04:16.186 ************************************
00:04:16.186 03:51:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:16.186 03:51:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:16.186 03:51:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:16.186 03:51:15 env -- common/autotest_common.sh@10 -- # set +x
00:04:16.446 ************************************
00:04:16.446 START TEST env_vtophys
00:04:16.446 ************************************
00:04:16.446 03:51:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:16.446 EAL: lib.eal log level changed from notice to debug
00:04:16.446 EAL: Detected lcore 0 as core 0 on socket 0
00:04:16.446 EAL: Detected lcore 1 as core 1 on socket 0
00:04:16.446 EAL: Detected lcore 2 as core 2 on socket 0
00:04:16.446 EAL: Detected lcore 3 as core 3 on socket 0
00:04:16.446 EAL: Detected lcore 4 as core 4 on socket 0
00:04:16.446 EAL: Detected lcore 5 as core 5 on socket 0
00:04:16.446 EAL: Detected lcore 6 as core 6 on socket 0
00:04:16.446 EAL: Detected lcore 7 as core 8 on socket 0
00:04:16.446 EAL: Detected lcore 8 as core 9 on socket 0
00:04:16.446 EAL: Detected lcore 9 as core 10 on socket 0
00:04:16.446 EAL: Detected lcore 10 as core 11 on socket 0
00:04:16.446 EAL: Detected lcore 11 as core 12 on socket 0
00:04:16.446 EAL: Detected lcore 12 as core 13 on socket 0
00:04:16.446 EAL: Detected lcore 13 as core 16 on socket 0
00:04:16.446 EAL: Detected lcore 14 as core 17 on socket 0
00:04:16.446 EAL: Detected lcore 15 as core 18 on socket 0
00:04:16.446 EAL: Detected lcore 16 as core 19 on socket 0
00:04:16.446 EAL: Detected lcore 17 as core 20 on socket 0
00:04:16.446 EAL: Detected lcore 18 as core 21 on socket 0
00:04:16.446 EAL: Detected lcore 19 as core 25 on socket 0
00:04:16.446 EAL: Detected lcore 20 as core 26 on socket 0
00:04:16.446 EAL: Detected lcore 21 as core 27 on socket 0
00:04:16.446 EAL: Detected lcore 22 as core 28 on socket 0
00:04:16.446 EAL: Detected lcore 23 as core 29 on socket 0
00:04:16.446 EAL: Detected lcore 24 as core 0 on socket 1
00:04:16.446 EAL: Detected lcore 25 as core 1 on socket 1
00:04:16.446 EAL: Detected lcore 26 as core 2 on socket 1
00:04:16.446 EAL: Detected lcore 27 as core 3 on socket 1
00:04:16.446 EAL: Detected lcore 28 as core 4 on socket 1
00:04:16.446 EAL: Detected lcore 29 as core 5 on socket 1
00:04:16.446 EAL: Detected lcore 30 as core 6 on socket 1
00:04:16.446 EAL: Detected lcore 31 as core 8 on socket 1
00:04:16.446 EAL: Detected lcore 32 as core 9 on socket 1
00:04:16.446 EAL: Detected lcore 33 as core 10 on socket 1
00:04:16.446 EAL: Detected lcore 34 as core 11 on socket 1
00:04:16.446 EAL: Detected lcore 35 as core 12 on socket 1
00:04:16.446 EAL: Detected lcore 36 as core 13 on socket 1
00:04:16.446 EAL: Detected lcore 37 as core 16 on socket 1
00:04:16.446 EAL: Detected lcore 38 as core 17 on socket 1
00:04:16.446 EAL: Detected lcore 39 as core 18 on socket 1
00:04:16.446 EAL: Detected lcore 40 as core 19 on socket 1
00:04:16.446 EAL: Detected lcore 41 as core 20 on socket 1
00:04:16.446 EAL: Detected lcore 42 as core 21 on socket 1
00:04:16.446 EAL: Detected lcore 43 as core 25 on socket 1
00:04:16.446 EAL: Detected lcore 44 as core 26 on socket 1
00:04:16.446 EAL: Detected lcore 45 as core 27 on socket 1
00:04:16.446 EAL: Detected lcore 46 as core 28 on socket 1
00:04:16.446 EAL: Detected lcore 47 as core 29 on socket 1
00:04:16.446 EAL: Detected lcore 48 as core 0 on socket 0
00:04:16.446 EAL: Detected lcore 49 as core 1 on socket 0
00:04:16.446 EAL: Detected lcore 50 as core 2 on socket 0
00:04:16.446 EAL: Detected lcore 51 as core 3 on socket 0
00:04:16.446 EAL: Detected lcore 52 as core 4 on socket 0
00:04:16.446 EAL: Detected lcore 53 as core 5 on socket 0
00:04:16.446 EAL: Detected lcore 54 as core 6 on socket 0
00:04:16.446 EAL: Detected lcore 55 as core 8 on socket 0
00:04:16.446 EAL: Detected lcore 56 as core 9 on socket 0
00:04:16.446 EAL: Detected lcore 57 as core 10 on socket 0
00:04:16.446 EAL: Detected lcore 58 as core 11 on socket 0
00:04:16.446 EAL: Detected lcore 59 as core 12 on socket 0
00:04:16.446 EAL: Detected lcore 60 as core 13 on socket 0
00:04:16.446 EAL: Detected lcore 61 as core 16 on socket 0
00:04:16.446 EAL: Detected lcore 62 as core 17 on socket 0
00:04:16.446 EAL: Detected lcore 63 as core 18 on socket 0
00:04:16.446 EAL: Detected lcore 64 as core 19 on socket 0
00:04:16.446 EAL: Detected lcore 65 as core 20 on socket 0
00:04:16.446 EAL: Detected lcore 66 as core 21 on socket 0
00:04:16.446 EAL: Detected lcore 67 as core 25 on socket 0
00:04:16.446 EAL: Detected lcore 68 as core 26 on socket 0
00:04:16.446 EAL: Detected lcore 69 as core 27 on socket 0
00:04:16.446 EAL: Detected lcore 70 as core 28 on socket 0
00:04:16.446 EAL: Detected lcore 71 as core 29 on socket 0
00:04:16.446 EAL: Detected lcore 72 as core 0 on socket 1
00:04:16.446 EAL: Detected lcore 73 as core 1 on socket 1
00:04:16.446 EAL: Detected lcore 74 as core 2 on socket 1
00:04:16.446 EAL: Detected lcore 75 as core 3 on socket 1
00:04:16.446 EAL: Detected lcore 76 as core 4 on socket 1
00:04:16.446 EAL: Detected lcore 77 as core 5 on socket 1
00:04:16.446 EAL: Detected lcore 78 as core 6 on socket 1
00:04:16.446 EAL: Detected lcore 79 as core 8 on socket 1
00:04:16.446 EAL: Detected lcore 80 as core 9 on socket 1
00:04:16.446 EAL: Detected lcore 81 as core 10 on socket 1
00:04:16.446 EAL: Detected lcore 82 as core 11 on socket 1
00:04:16.446 EAL: Detected lcore 83 as core 12 on socket 1
00:04:16.446 EAL: Detected lcore 84 as core 13 on socket 1
00:04:16.446 EAL: Detected lcore 85 as core 16 on socket 1
00:04:16.446 EAL: Detected lcore 86 as core 17 on socket 1
00:04:16.446 EAL: Detected lcore 87 as core 18 on socket 1
00:04:16.446 EAL: Detected lcore 88 as core 19 on socket 1
00:04:16.446 EAL: Detected lcore 89 as core 20 on socket 1
00:04:16.446 EAL: Detected lcore 90 as core 21 on socket 1
00:04:16.446 EAL: Detected lcore 91 as core 25 on socket 1
00:04:16.446 EAL: Detected lcore 92 as core 26 on socket 1
00:04:16.446 EAL: Detected lcore 93 as core 27 on socket 1
00:04:16.446 EAL: Detected lcore 94 as core 28 on socket 1
00:04:16.446 EAL: Detected lcore 95 as core 29 on socket 1
00:04:16.446 EAL: Maximum logical cores by configuration: 128
00:04:16.446 EAL: Detected CPU lcores: 96
00:04:16.446 EAL: Detected NUMA nodes: 2
00:04:16.446 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:16.446 EAL: Detected shared linkage of DPDK
00:04:16.446 EAL: No shared files mode enabled, IPC will be disabled
00:04:16.446 EAL: Bus pci wants IOVA as 'DC'
00:04:16.446 EAL: Buses did not request a specific IOVA mode.
00:04:16.446 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:16.446 EAL: Selected IOVA mode 'VA'
00:04:16.446 EAL: Probing VFIO support...
00:04:16.446 EAL: IOMMU type 1 (Type 1) is supported
00:04:16.446 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:16.446 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:16.446 EAL: VFIO support initialized
00:04:16.446 EAL: Ask a virtual area of 0x2e000 bytes
00:04:16.446 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:16.446 EAL: Setting up physically contiguous memory...
00:04:16.446 EAL: Setting maximum number of open files to 524288 00:04:16.446 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.446 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:16.446 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.446 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.446 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.446 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.446 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.446 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.446 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.446 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.446 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.446 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.446 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.446 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.446 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.446 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.446 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.446 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.446 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.446 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.446 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.446 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.446 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.446 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.447 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:16.447 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.447 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:16.447 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.447 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.447 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:16.447 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:16.447 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.447 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:16.447 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.447 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.447 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:16.447 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:16.447 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.447 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:16.447 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.447 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.447 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:16.447 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:16.447 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.447 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:16.447 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.447 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.447 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:16.447 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:16.447 EAL: Hugepages will be freed exactly as allocated. 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: TSC frequency is ~2100000 KHz 00:04:16.447 EAL: Main lcore 0 is ready (tid=7f38021c8a00;cpuset=[0]) 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 0 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.447 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.447 00:04:16.447 00:04:16.447 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.447 http://cunit.sourceforge.net/ 00:04:16.447 00:04:16.447 00:04:16.447 Suite: components_suite 00:04:16.447 Test: vtophys_malloc_test ...passed 00:04:16.447 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.447 EAL: Trying to obtain current memory policy. 
00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.447 EAL: Restoring previous memory policy: 4 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.447 EAL: request: mp_malloc_sync 00:04:16.447 EAL: No shared files mode enabled, IPC is disabled 00:04:16.447 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.447 EAL: Trying to obtain current memory policy. 00:04:16.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.706 EAL: Restoring previous memory policy: 4 00:04:16.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.706 EAL: request: mp_malloc_sync 00:04:16.706 EAL: No shared files mode enabled, IPC is disabled 00:04:16.706 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.706 EAL: request: mp_malloc_sync 00:04:16.706 EAL: No shared files mode enabled, IPC is disabled 00:04:16.706 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.706 EAL: Trying to obtain current memory policy. 
00:04:16.706 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:16.706 EAL: Restoring previous memory policy: 4
00:04:16.706 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.706 EAL: request: mp_malloc_sync
00:04:16.706 EAL: No shared files mode enabled, IPC is disabled
00:04:16.706 EAL: Heap on socket 0 was expanded by 514MB
00:04:16.706 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.966 EAL: request: mp_malloc_sync
00:04:16.966 EAL: No shared files mode enabled, IPC is disabled
00:04:16.966 EAL: Heap on socket 0 was shrunk by 514MB
00:04:16.966 EAL: Trying to obtain current memory policy.
00:04:16.966 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.224 EAL: Restoring previous memory policy: 4
00:04:17.224 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.224 EAL: request: mp_malloc_sync
00:04:17.224 EAL: No shared files mode enabled, IPC is disabled
00:04:17.224 EAL: Heap on socket 0 was expanded by 1026MB
00:04:17.224 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.484 EAL: request: mp_malloc_sync
00:04:17.484 EAL: No shared files mode enabled, IPC is disabled
00:04:17.484 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:17.484 passed
00:04:17.484
00:04:17.484 Run Summary: Type Total Ran Passed Failed Inactive
00:04:17.484 suites 1 1 n/a 0 0
00:04:17.484 tests 2 2 2 0 0
00:04:17.484 asserts 497 497 497 0 n/a
00:04:17.484
00:04:17.484 Elapsed time = 0.959 seconds
00:04:17.484 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.484 EAL: request: mp_malloc_sync
00:04:17.484 EAL: No shared files mode enabled, IPC is disabled
00:04:17.484 EAL: Heap on socket 0 was shrunk by 2MB
00:04:17.484 EAL: No shared files mode enabled, IPC is disabled
00:04:17.484 EAL: No shared files mode enabled, IPC is disabled
00:04:17.484 EAL: No shared files mode enabled, IPC is disabled
00:04:17.484
00:04:17.484 real 0m1.095s
00:04:17.484 user 0m0.637s
00:04:17.484 sys 0m0.429s
00:04:17.484 03:51:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.484 03:51:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:17.484 ************************************
00:04:17.484 END TEST env_vtophys
00:04:17.484 ************************************
00:04:17.484 03:51:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:17.484 03:51:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.484 03:51:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.484 03:51:16 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.484 ************************************
00:04:17.484 START TEST env_pci
00:04:17.484 ************************************
00:04:17.484 03:51:16 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:17.484
00:04:17.484
00:04:17.484 CUnit - A unit testing framework for C - Version 2.1-3
00:04:17.484 http://cunit.sourceforge.net/
00:04:17.484
00:04:17.484
00:04:17.484 Suite: pci
00:04:17.484 Test: pci_hook ...[2024-12-10 03:51:16.658470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4057554 has claimed it
00:04:17.484 EAL: Cannot find device (10000:00:01.0)
00:04:17.484 EAL: Failed to attach device on primary process
00:04:17.484 passed
00:04:17.484
00:04:17.484 Run Summary: Type Total Ran Passed Failed Inactive
00:04:17.484 suites 1 1 n/a 0 0
00:04:17.484 tests 1 1 1 0 0
00:04:17.484 asserts 25 25 25 0 n/a
00:04:17.484
00:04:17.484 Elapsed time = 0.028 seconds
00:04:17.484
00:04:17.484 real 0m0.047s
00:04:17.484 user 0m0.011s
00:04:17.484 sys 0m0.036s
00:04:17.484 03:51:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.484 03:51:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:17.484 ************************************
00:04:17.484 END TEST env_pci
00:04:17.484 ************************************
00:04:17.484 03:51:16 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:17.484 03:51:16 env -- env/env.sh@15 -- # uname
00:04:17.484 03:51:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:17.484 03:51:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:17.484 03:51:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:17.484 03:51:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:17.484 03:51:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.484 03:51:16 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.484 ************************************
00:04:17.484 START TEST env_dpdk_post_init
00:04:17.484 ************************************
00:04:17.484 03:51:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:17.743 EAL: Detected CPU lcores: 96
00:04:17.743 EAL: Detected NUMA nodes: 2
00:04:17.743 EAL: Detected shared linkage of DPDK
00:04:17.743 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:17.743 EAL: Selected IOVA mode 'VA'
00:04:17.743 EAL: VFIO support initialized
00:04:17.743 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:17.743 EAL: Using IOMMU type 1 (Type 1)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:17.743 EAL: Ignore mapping IO port bar(1)
00:04:17.743 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:18.681 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:18.681 EAL: Ignore mapping IO port bar(1)
00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:18.681 EAL: Ignore mapping IO port bar(1)
00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:18.681 EAL: Ignore mapping IO port bar(1)
00:04:18.681 EAL:
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:18.681 EAL: Ignore mapping IO port bar(1) 00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:18.681 EAL: Ignore mapping IO port bar(1) 00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:18.681 EAL: Ignore mapping IO port bar(1) 00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:18.681 EAL: Ignore mapping IO port bar(1) 00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:18.681 EAL: Ignore mapping IO port bar(1) 00:04:18.681 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:21.967 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:21.967 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:21.967 Starting DPDK initialization... 00:04:21.967 Starting SPDK post initialization... 00:04:21.967 SPDK NVMe probe 00:04:21.967 Attaching to 0000:5e:00.0 00:04:21.967 Attached to 0000:5e:00.0 00:04:21.967 Cleaning up... 00:04:21.967 00:04:21.967 real 0m4.355s 00:04:21.967 user 0m2.968s 00:04:21.967 sys 0m0.460s 00:04:21.967 03:51:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.967 03:51:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.967 ************************************ 00:04:21.967 END TEST env_dpdk_post_init 00:04:21.967 ************************************ 00:04:21.967 03:51:21 env -- env/env.sh@26 -- # uname 00:04:21.967 03:51:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.967 03:51:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.967 03:51:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.967 03:51:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.967 03:51:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.967 ************************************ 00:04:21.967 START TEST env_mem_callbacks 00:04:21.967 ************************************ 00:04:21.967 03:51:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.967 EAL: Detected CPU lcores: 96 00:04:21.967 EAL: Detected NUMA nodes: 2 00:04:21.967 EAL: Detected shared linkage of DPDK 00:04:21.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.967 EAL: Selected IOVA mode 'VA' 00:04:21.967 EAL: VFIO support initialized 00:04:21.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.967 00:04:21.967 00:04:21.967 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.967 http://cunit.sourceforge.net/ 00:04:21.967 00:04:21.967 00:04:21.967 Suite: memory 00:04:21.967 Test: test ... 
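The register/unregister trace that follows is the heart of the mem_callbacks test: each malloc large enough to make DPDK grow its heap surfaces as a 'register' of the new region, and the corresponding free surfaces as an 'unregister', the same notification path SPDK relies on to keep vfio/IOMMU DMA mappings in sync. To rerun just this test by hand, the binary path from the run_test line above can be invoked directly (root assumed for hugepage access):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/mem_callbacks/mem_callbacks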
00:04:21.967 register 0x200000200000 2097152 00:04:21.967 malloc 3145728 00:04:21.967 register 0x200000400000 4194304 00:04:21.967 buf 0x200000500000 len 3145728 PASSED 00:04:21.967 malloc 64 00:04:21.967 buf 0x2000004fff40 len 64 PASSED 00:04:21.967 malloc 4194304 00:04:21.967 register 0x200000800000 6291456 00:04:21.967 buf 0x200000a00000 len 4194304 PASSED 00:04:21.967 free 0x200000500000 3145728 00:04:21.967 free 0x2000004fff40 64 00:04:21.967 unregister 0x200000400000 4194304 PASSED 00:04:21.967 free 0x200000a00000 4194304 00:04:21.967 unregister 0x200000800000 6291456 PASSED 00:04:21.967 malloc 8388608 00:04:21.967 register 0x200000400000 10485760 00:04:21.967 buf 0x200000600000 len 8388608 PASSED 00:04:21.967 free 0x200000600000 8388608 00:04:21.967 unregister 0x200000400000 10485760 PASSED 00:04:21.967 passed 00:04:21.967 00:04:21.967 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.967 suites 1 1 n/a 0 0 00:04:21.967 tests 1 1 1 0 0 00:04:21.967 asserts 15 15 15 0 n/a 00:04:21.967 00:04:21.967 Elapsed time = 0.008 seconds 00:04:21.967 00:04:21.967 real 0m0.057s 00:04:21.967 user 0m0.021s 00:04:21.967 sys 0m0.037s 00:04:22.225 03:51:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.225 03:51:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.225 ************************************ 00:04:22.225 END TEST env_mem_callbacks 00:04:22.225 ************************************ 00:04:22.225 00:04:22.225 real 0m6.224s 00:04:22.225 user 0m4.017s 00:04:22.225 sys 0m1.284s 00:04:22.225 03:51:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.225 03:51:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.225 ************************************ 00:04:22.225 END TEST env 00:04:22.226 ************************************ 00:04:22.226 03:51:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.226 03:51:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.226 03:51:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.226 03:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.226 ************************************ 00:04:22.226 START TEST rpc 00:04:22.226 ************************************ 00:04:22.226 03:51:21 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.226 * Looking for test storage... 
00:04:22.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.226 03:51:21 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.226 03:51:21 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.226 03:51:21 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.484 03:51:21 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.484 03:51:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.484 03:51:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.484 03:51:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.484 03:51:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.484 03:51:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.484 03:51:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.484 03:51:21 rpc -- scripts/common.sh@345 -- # : 1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.484 03:51:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.484 03:51:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.484 03:51:21 rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.484 03:51:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.484 03:51:21 rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.484 03:51:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.484 03:51:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.484 03:51:21 rpc -- scripts/common.sh@368 -- # return 0 00:04:22.484 03:51:21 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.484 03:51:21 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.484 --rc genhtml_branch_coverage=1 00:04:22.484 --rc genhtml_function_coverage=1 00:04:22.484 --rc genhtml_legend=1 00:04:22.484 --rc geninfo_all_blocks=1 00:04:22.484 --rc geninfo_unexecuted_blocks=1 00:04:22.484 00:04:22.484 ' 00:04:22.484 03:51:21 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.484 --rc genhtml_branch_coverage=1 00:04:22.484 --rc genhtml_function_coverage=1 00:04:22.484 --rc genhtml_legend=1 00:04:22.484 --rc geninfo_all_blocks=1 00:04:22.484 --rc geninfo_unexecuted_blocks=1 00:04:22.484 00:04:22.484 ' 00:04:22.484 03:51:21 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.485 --rc genhtml_branch_coverage=1 00:04:22.485 --rc genhtml_function_coverage=1 
00:04:22.485 --rc genhtml_legend=1 00:04:22.485 --rc geninfo_all_blocks=1 00:04:22.485 --rc geninfo_unexecuted_blocks=1 00:04:22.485 00:04:22.485 ' 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.485 --rc genhtml_branch_coverage=1 00:04:22.485 --rc genhtml_function_coverage=1 00:04:22.485 --rc genhtml_legend=1 00:04:22.485 --rc geninfo_all_blocks=1 00:04:22.485 --rc geninfo_unexecuted_blocks=1 00:04:22.485 00:04:22.485 ' 00:04:22.485 03:51:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4058369 00:04:22.485 03:51:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.485 03:51:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:22.485 03:51:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4058369 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 4058369 ']' 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.485 03:51:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.485 [2024-12-10 03:51:21.588717] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:22.485 [2024-12-10 03:51:21.588762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058369 ] 00:04:22.485 [2024-12-10 03:51:21.663226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.485 [2024-12-10 03:51:21.703553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.485 [2024-12-10 03:51:21.703589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4058369' to capture a snapshot of events at runtime. 00:04:22.485 [2024-12-10 03:51:21.703596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.485 [2024-12-10 03:51:21.703602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.485 [2024-12-10 03:51:21.703607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4058369 for offline analysis/debug. 
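The NOTICE block above is spdk_tgt, started with '-e bdev', advertising its trace buffer: tracepoints for the bdev group are being written to /dev/shm/spdk_tgt_trace.pid4058369. A snapshot can be pulled while the target is still running, using the exact command the log suggests; the only assumption below is that spdk_trace sits next to spdk_tgt under build/bin in this tree:

  # -s matches the app name, -p the pid that owns the shm trace file.
  ./build/bin/spdk_trace -s spdk_tgt -p 4058369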
00:04:22.485 [2024-12-10 03:51:21.704122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.744 03:51:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.744 03:51:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.744 03:51:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.744 03:51:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.744 03:51:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.744 03:51:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.744 03:51:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.744 03:51:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.744 03:51:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.744 ************************************ 00:04:22.744 START TEST rpc_integrity 00:04:22.744 ************************************ 00:04:22.744 03:51:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:22.744 03:51:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.744 03:51:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.744 03:51:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.744 03:51:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.744 03:51:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.744 03:51:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.744 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.744 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.744 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.744 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.003 { 00:04:23.003 "name": "Malloc0", 00:04:23.003 "aliases": [ 00:04:23.003 "4eb551fb-9a17-4a29-8974-5e69aa3ab685" 00:04:23.003 ], 00:04:23.003 "product_name": "Malloc disk", 00:04:23.003 "block_size": 512, 00:04:23.003 "num_blocks": 16384, 00:04:23.003 "uuid": "4eb551fb-9a17-4a29-8974-5e69aa3ab685", 00:04:23.003 "assigned_rate_limits": { 00:04:23.003 "rw_ios_per_sec": 0, 00:04:23.003 "rw_mbytes_per_sec": 0, 00:04:23.003 "r_mbytes_per_sec": 0, 00:04:23.003 "w_mbytes_per_sec": 0 00:04:23.003 }, 
00:04:23.003 "claimed": false, 00:04:23.003 "zoned": false, 00:04:23.003 "supported_io_types": { 00:04:23.003 "read": true, 00:04:23.003 "write": true, 00:04:23.003 "unmap": true, 00:04:23.003 "flush": true, 00:04:23.003 "reset": true, 00:04:23.003 "nvme_admin": false, 00:04:23.003 "nvme_io": false, 00:04:23.003 "nvme_io_md": false, 00:04:23.003 "write_zeroes": true, 00:04:23.003 "zcopy": true, 00:04:23.003 "get_zone_info": false, 00:04:23.003 "zone_management": false, 00:04:23.003 "zone_append": false, 00:04:23.003 "compare": false, 00:04:23.003 "compare_and_write": false, 00:04:23.003 "abort": true, 00:04:23.003 "seek_hole": false, 00:04:23.003 "seek_data": false, 00:04:23.003 "copy": true, 00:04:23.003 "nvme_iov_md": false 00:04:23.003 }, 00:04:23.003 "memory_domains": [ 00:04:23.003 { 00:04:23.003 "dma_device_id": "system", 00:04:23.003 "dma_device_type": 1 00:04:23.003 }, 00:04:23.003 { 00:04:23.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.003 "dma_device_type": 2 00:04:23.003 } 00:04:23.003 ], 00:04:23.003 "driver_specific": {} 00:04:23.003 } 00:04:23.003 ]' 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.003 [2024-12-10 03:51:22.097421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.003 [2024-12-10 03:51:22.097447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.003 [2024-12-10 03:51:22.097459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e8740 00:04:23.003 [2024-12-10 03:51:22.097465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.003 [2024-12-10 03:51:22.098533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.003 [2024-12-10 03:51:22.098553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.003 Passthru0 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.003 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.003 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.003 { 00:04:23.003 "name": "Malloc0", 00:04:23.003 "aliases": [ 00:04:23.003 "4eb551fb-9a17-4a29-8974-5e69aa3ab685" 00:04:23.003 ], 00:04:23.003 "product_name": "Malloc disk", 00:04:23.003 "block_size": 512, 00:04:23.004 "num_blocks": 16384, 00:04:23.004 "uuid": "4eb551fb-9a17-4a29-8974-5e69aa3ab685", 00:04:23.004 "assigned_rate_limits": { 00:04:23.004 "rw_ios_per_sec": 0, 00:04:23.004 "rw_mbytes_per_sec": 0, 00:04:23.004 "r_mbytes_per_sec": 0, 00:04:23.004 "w_mbytes_per_sec": 0 00:04:23.004 }, 00:04:23.004 "claimed": true, 00:04:23.004 "claim_type": "exclusive_write", 00:04:23.004 "zoned": false, 00:04:23.004 "supported_io_types": { 00:04:23.004 "read": true, 00:04:23.004 "write": true, 00:04:23.004 "unmap": true, 00:04:23.004 "flush": 
true, 00:04:23.004 "reset": true, 00:04:23.004 "nvme_admin": false, 00:04:23.004 "nvme_io": false, 00:04:23.004 "nvme_io_md": false, 00:04:23.004 "write_zeroes": true, 00:04:23.004 "zcopy": true, 00:04:23.004 "get_zone_info": false, 00:04:23.004 "zone_management": false, 00:04:23.004 "zone_append": false, 00:04:23.004 "compare": false, 00:04:23.004 "compare_and_write": false, 00:04:23.004 "abort": true, 00:04:23.004 "seek_hole": false, 00:04:23.004 "seek_data": false, 00:04:23.004 "copy": true, 00:04:23.004 "nvme_iov_md": false 00:04:23.004 }, 00:04:23.004 "memory_domains": [ 00:04:23.004 { 00:04:23.004 "dma_device_id": "system", 00:04:23.004 "dma_device_type": 1 00:04:23.004 }, 00:04:23.004 { 00:04:23.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.004 "dma_device_type": 2 00:04:23.004 } 00:04:23.004 ], 00:04:23.004 "driver_specific": {} 00:04:23.004 }, 00:04:23.004 { 00:04:23.004 "name": "Passthru0", 00:04:23.004 "aliases": [ 00:04:23.004 "705aa410-973c-55ec-9ad4-a19bcecb0e2c" 00:04:23.004 ], 00:04:23.004 "product_name": "passthru", 00:04:23.004 "block_size": 512, 00:04:23.004 "num_blocks": 16384, 00:04:23.004 "uuid": "705aa410-973c-55ec-9ad4-a19bcecb0e2c", 00:04:23.004 "assigned_rate_limits": { 00:04:23.004 "rw_ios_per_sec": 0, 00:04:23.004 "rw_mbytes_per_sec": 0, 00:04:23.004 "r_mbytes_per_sec": 0, 00:04:23.004 "w_mbytes_per_sec": 0 00:04:23.004 }, 00:04:23.004 "claimed": false, 00:04:23.004 "zoned": false, 00:04:23.004 "supported_io_types": { 00:04:23.004 "read": true, 00:04:23.004 "write": true, 00:04:23.004 "unmap": true, 00:04:23.004 "flush": true, 00:04:23.004 "reset": true, 00:04:23.004 "nvme_admin": false, 00:04:23.004 "nvme_io": false, 00:04:23.004 "nvme_io_md": false, 00:04:23.004 "write_zeroes": true, 00:04:23.004 "zcopy": true, 00:04:23.004 "get_zone_info": false, 00:04:23.004 "zone_management": false, 00:04:23.004 "zone_append": false, 00:04:23.004 "compare": false, 00:04:23.004 "compare_and_write": false, 00:04:23.004 "abort": true, 00:04:23.004 "seek_hole": false, 00:04:23.004 "seek_data": false, 00:04:23.004 "copy": true, 00:04:23.004 "nvme_iov_md": false 00:04:23.004 }, 00:04:23.004 "memory_domains": [ 00:04:23.004 { 00:04:23.004 "dma_device_id": "system", 00:04:23.004 "dma_device_type": 1 00:04:23.004 }, 00:04:23.004 { 00:04:23.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.004 "dma_device_type": 2 00:04:23.004 } 00:04:23.004 ], 00:04:23.004 "driver_specific": { 00:04:23.004 "passthru": { 00:04:23.004 "name": "Passthru0", 00:04:23.004 "base_bdev_name": "Malloc0" 00:04:23.004 } 00:04:23.004 } 00:04:23.004 } 00:04:23.004 ]' 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.004 03:51:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.004 00:04:23.004 real 0m0.272s 00:04:23.004 user 0m0.184s 00:04:23.004 sys 0m0.027s 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.004 03:51:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 ************************************ 00:04:23.004 END TEST rpc_integrity 00:04:23.004 ************************************ 00:04:23.004 03:51:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.004 03:51:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.004 03:51:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.004 03:51:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 ************************************ 00:04:23.263 START TEST rpc_plugins 00:04:23.263 ************************************ 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.263 { 00:04:23.263 "name": "Malloc1", 00:04:23.263 "aliases": [ 00:04:23.263 "5e91b14d-b339-4b23-aea7-8c5bb5044bbe" 00:04:23.263 ], 00:04:23.263 "product_name": "Malloc disk", 00:04:23.263 "block_size": 4096, 00:04:23.263 "num_blocks": 256, 00:04:23.263 "uuid": "5e91b14d-b339-4b23-aea7-8c5bb5044bbe", 00:04:23.263 "assigned_rate_limits": { 00:04:23.263 "rw_ios_per_sec": 0, 00:04:23.263 "rw_mbytes_per_sec": 0, 00:04:23.263 "r_mbytes_per_sec": 0, 00:04:23.263 "w_mbytes_per_sec": 0 00:04:23.263 }, 00:04:23.263 "claimed": false, 00:04:23.263 "zoned": false, 00:04:23.263 "supported_io_types": { 00:04:23.263 "read": true, 00:04:23.263 "write": true, 00:04:23.263 "unmap": true, 00:04:23.263 "flush": true, 00:04:23.263 "reset": true, 00:04:23.263 "nvme_admin": false, 00:04:23.263 "nvme_io": false, 00:04:23.263 "nvme_io_md": false, 00:04:23.263 "write_zeroes": true, 00:04:23.263 "zcopy": true, 00:04:23.263 "get_zone_info": false, 00:04:23.263 "zone_management": false, 00:04:23.263 "zone_append": false, 00:04:23.263 "compare": false, 00:04:23.263 "compare_and_write": false, 00:04:23.263 "abort": true, 00:04:23.263 "seek_hole": false, 00:04:23.263 "seek_data": false, 00:04:23.263 "copy": true, 00:04:23.263 "nvme_iov_md": false 
00:04:23.263 }, 00:04:23.263 "memory_domains": [ 00:04:23.263 { 00:04:23.263 "dma_device_id": "system", 00:04:23.263 "dma_device_type": 1 00:04:23.263 }, 00:04:23.263 { 00:04:23.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.263 "dma_device_type": 2 00:04:23.263 } 00:04:23.263 ], 00:04:23.263 "driver_specific": {} 00:04:23.263 } 00:04:23.263 ]' 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.263 03:51:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.263 00:04:23.263 real 0m0.145s 00:04:23.263 user 0m0.084s 00:04:23.263 sys 0m0.024s 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.263 03:51:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.263 ************************************ 00:04:23.263 END TEST rpc_plugins 00:04:23.263 ************************************ 00:04:23.263 03:51:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.263 03:51:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.264 03:51:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.264 03:51:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 ************************************ 00:04:23.264 START TEST rpc_trace_cmd_test 00:04:23.264 ************************************ 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.264 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4058369", 00:04:23.264 "tpoint_group_mask": "0x8", 00:04:23.264 "iscsi_conn": { 00:04:23.264 "mask": "0x2", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "scsi": { 00:04:23.264 "mask": "0x4", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "bdev": { 00:04:23.264 "mask": "0x8", 00:04:23.264 "tpoint_mask": "0xffffffffffffffff" 00:04:23.264 }, 00:04:23.264 "nvmf_rdma": { 00:04:23.264 "mask": "0x10", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "nvmf_tcp": { 00:04:23.264 "mask": "0x20", 00:04:23.264 
"tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "ftl": { 00:04:23.264 "mask": "0x40", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "blobfs": { 00:04:23.264 "mask": "0x80", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "dsa": { 00:04:23.264 "mask": "0x200", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "thread": { 00:04:23.264 "mask": "0x400", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "nvme_pcie": { 00:04:23.264 "mask": "0x800", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "iaa": { 00:04:23.264 "mask": "0x1000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "nvme_tcp": { 00:04:23.264 "mask": "0x2000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "bdev_nvme": { 00:04:23.264 "mask": "0x4000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "sock": { 00:04:23.264 "mask": "0x8000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "blob": { 00:04:23.264 "mask": "0x10000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "bdev_raid": { 00:04:23.264 "mask": "0x20000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 }, 00:04:23.264 "scheduler": { 00:04:23.264 "mask": "0x40000", 00:04:23.264 "tpoint_mask": "0x0" 00:04:23.264 } 00:04:23.264 }' 00:04:23.264 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.543 00:04:23.543 real 0m0.210s 00:04:23.543 user 0m0.182s 00:04:23.543 sys 0m0.022s 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.543 03:51:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.543 ************************************ 00:04:23.543 END TEST rpc_trace_cmd_test 00:04:23.543 ************************************ 00:04:23.543 03:51:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.543 03:51:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.543 03:51:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.543 03:51:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.543 03:51:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.543 03:51:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.543 ************************************ 00:04:23.543 START TEST rpc_daemon_integrity 00:04:23.543 ************************************ 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.543 03:51:22 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.543 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.885 { 00:04:23.885 "name": "Malloc2", 00:04:23.885 "aliases": [ 00:04:23.885 "9f69c27f-14b6-4460-90bb-2821cba27855" 00:04:23.885 ], 00:04:23.885 "product_name": "Malloc disk", 00:04:23.885 "block_size": 512, 00:04:23.885 "num_blocks": 16384, 00:04:23.885 "uuid": "9f69c27f-14b6-4460-90bb-2821cba27855", 00:04:23.885 "assigned_rate_limits": { 00:04:23.885 "rw_ios_per_sec": 0, 00:04:23.885 "rw_mbytes_per_sec": 0, 00:04:23.885 "r_mbytes_per_sec": 0, 00:04:23.885 "w_mbytes_per_sec": 0 00:04:23.885 }, 00:04:23.885 "claimed": false, 00:04:23.885 "zoned": false, 00:04:23.885 "supported_io_types": { 00:04:23.885 "read": true, 00:04:23.885 "write": true, 00:04:23.885 "unmap": true, 00:04:23.885 "flush": true, 00:04:23.885 "reset": true, 00:04:23.885 "nvme_admin": false, 00:04:23.885 "nvme_io": false, 00:04:23.885 "nvme_io_md": false, 00:04:23.885 "write_zeroes": true, 00:04:23.885 "zcopy": true, 00:04:23.885 "get_zone_info": false, 00:04:23.885 "zone_management": false, 00:04:23.885 "zone_append": false, 00:04:23.885 "compare": false, 00:04:23.885 "compare_and_write": false, 00:04:23.885 "abort": true, 00:04:23.885 "seek_hole": false, 00:04:23.885 "seek_data": false, 00:04:23.885 "copy": true, 00:04:23.885 "nvme_iov_md": false 00:04:23.885 }, 00:04:23.885 "memory_domains": [ 00:04:23.885 { 00:04:23.885 "dma_device_id": "system", 00:04:23.885 "dma_device_type": 1 00:04:23.885 }, 00:04:23.885 { 00:04:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.885 "dma_device_type": 2 00:04:23.885 } 00:04:23.885 ], 00:04:23.885 "driver_specific": {} 00:04:23.885 } 00:04:23.885 ]' 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 [2024-12-10 03:51:22.927656] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.885 
[2024-12-10 03:51:22.927682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.885 [2024-12-10 03:51:22.927693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b5fe0 00:04:23.885 [2024-12-10 03:51:22.927699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.885 [2024-12-10 03:51:22.928646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.885 [2024-12-10 03:51:22.928667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.885 Passthru0 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.885 { 00:04:23.885 "name": "Malloc2", 00:04:23.885 "aliases": [ 00:04:23.885 "9f69c27f-14b6-4460-90bb-2821cba27855" 00:04:23.885 ], 00:04:23.885 "product_name": "Malloc disk", 00:04:23.885 "block_size": 512, 00:04:23.885 "num_blocks": 16384, 00:04:23.885 "uuid": "9f69c27f-14b6-4460-90bb-2821cba27855", 00:04:23.885 "assigned_rate_limits": { 00:04:23.885 "rw_ios_per_sec": 0, 00:04:23.885 "rw_mbytes_per_sec": 0, 00:04:23.885 "r_mbytes_per_sec": 0, 00:04:23.885 "w_mbytes_per_sec": 0 00:04:23.885 }, 00:04:23.885 "claimed": true, 00:04:23.885 "claim_type": "exclusive_write", 00:04:23.885 "zoned": false, 00:04:23.885 "supported_io_types": { 00:04:23.885 "read": true, 00:04:23.885 "write": true, 00:04:23.885 "unmap": true, 00:04:23.885 "flush": true, 00:04:23.885 "reset": true, 00:04:23.885 "nvme_admin": false, 00:04:23.885 "nvme_io": false, 00:04:23.885 "nvme_io_md": false, 00:04:23.885 "write_zeroes": true, 00:04:23.885 "zcopy": true, 00:04:23.885 "get_zone_info": false, 00:04:23.885 "zone_management": false, 00:04:23.885 "zone_append": false, 00:04:23.885 "compare": false, 00:04:23.885 "compare_and_write": false, 00:04:23.885 "abort": true, 00:04:23.885 "seek_hole": false, 00:04:23.885 "seek_data": false, 00:04:23.885 "copy": true, 00:04:23.885 "nvme_iov_md": false 00:04:23.885 }, 00:04:23.885 "memory_domains": [ 00:04:23.885 { 00:04:23.885 "dma_device_id": "system", 00:04:23.885 "dma_device_type": 1 00:04:23.885 }, 00:04:23.885 { 00:04:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.885 "dma_device_type": 2 00:04:23.885 } 00:04:23.885 ], 00:04:23.885 "driver_specific": {} 00:04:23.885 }, 00:04:23.885 { 00:04:23.885 "name": "Passthru0", 00:04:23.885 "aliases": [ 00:04:23.885 "13943ac9-f2d0-592e-9d82-481855665924" 00:04:23.885 ], 00:04:23.885 "product_name": "passthru", 00:04:23.885 "block_size": 512, 00:04:23.885 "num_blocks": 16384, 00:04:23.885 "uuid": "13943ac9-f2d0-592e-9d82-481855665924", 00:04:23.885 "assigned_rate_limits": { 00:04:23.885 "rw_ios_per_sec": 0, 00:04:23.885 "rw_mbytes_per_sec": 0, 00:04:23.885 "r_mbytes_per_sec": 0, 00:04:23.885 "w_mbytes_per_sec": 0 00:04:23.885 }, 00:04:23.885 "claimed": false, 00:04:23.885 "zoned": false, 00:04:23.885 "supported_io_types": { 00:04:23.885 "read": true, 00:04:23.885 "write": true, 00:04:23.885 "unmap": true, 00:04:23.885 "flush": true, 00:04:23.885 "reset": true, 
00:04:23.885 "nvme_admin": false, 00:04:23.885 "nvme_io": false, 00:04:23.885 "nvme_io_md": false, 00:04:23.885 "write_zeroes": true, 00:04:23.885 "zcopy": true, 00:04:23.885 "get_zone_info": false, 00:04:23.885 "zone_management": false, 00:04:23.885 "zone_append": false, 00:04:23.885 "compare": false, 00:04:23.885 "compare_and_write": false, 00:04:23.885 "abort": true, 00:04:23.885 "seek_hole": false, 00:04:23.885 "seek_data": false, 00:04:23.885 "copy": true, 00:04:23.885 "nvme_iov_md": false 00:04:23.885 }, 00:04:23.885 "memory_domains": [ 00:04:23.885 { 00:04:23.885 "dma_device_id": "system", 00:04:23.885 "dma_device_type": 1 00:04:23.885 }, 00:04:23.885 { 00:04:23.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.885 "dma_device_type": 2 00:04:23.885 } 00:04:23.885 ], 00:04:23.885 "driver_specific": { 00:04:23.885 "passthru": { 00:04:23.885 "name": "Passthru0", 00:04:23.885 "base_bdev_name": "Malloc2" 00:04:23.885 } 00:04:23.885 } 00:04:23.885 } 00:04:23.885 ]' 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.885 03:51:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.886 03:51:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.886 03:51:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.886 00:04:23.886 real 0m0.264s 00:04:23.886 user 0m0.171s 00:04:23.886 sys 0m0.032s 00:04:23.886 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.886 03:51:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.886 ************************************ 00:04:23.886 END TEST rpc_daemon_integrity 00:04:23.886 ************************************ 00:04:23.886 03:51:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.886 03:51:23 rpc -- rpc/rpc.sh@84 -- # killprocess 4058369 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 4058369 ']' 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@958 -- # kill -0 4058369 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058369 
00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058369' 00:04:23.886 killing process with pid 4058369 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@973 -- # kill 4058369 00:04:23.886 03:51:23 rpc -- common/autotest_common.sh@978 -- # wait 4058369 00:04:24.454 00:04:24.454 real 0m2.085s 00:04:24.454 user 0m2.661s 00:04:24.454 sys 0m0.680s 00:04:24.454 03:51:23 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.454 03:51:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.454 ************************************ 00:04:24.454 END TEST rpc 00:04:24.454 ************************************ 00:04:24.454 03:51:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.454 03:51:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.454 03:51:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.454 03:51:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.454 ************************************ 00:04:24.454 START TEST skip_rpc 00:04:24.454 ************************************ 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.454 * Looking for test storage... 00:04:24.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.454 03:51:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.454 --rc genhtml_branch_coverage=1 00:04:24.454 --rc genhtml_function_coverage=1 00:04:24.454 --rc genhtml_legend=1 00:04:24.454 --rc geninfo_all_blocks=1 00:04:24.454 --rc geninfo_unexecuted_blocks=1 00:04:24.454 00:04:24.454 ' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.454 --rc genhtml_branch_coverage=1 00:04:24.454 --rc genhtml_function_coverage=1 00:04:24.454 --rc genhtml_legend=1 00:04:24.454 --rc geninfo_all_blocks=1 00:04:24.454 --rc geninfo_unexecuted_blocks=1 00:04:24.454 00:04:24.454 ' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.454 --rc genhtml_branch_coverage=1 00:04:24.454 --rc genhtml_function_coverage=1 00:04:24.454 --rc genhtml_legend=1 00:04:24.454 --rc geninfo_all_blocks=1 00:04:24.454 --rc geninfo_unexecuted_blocks=1 00:04:24.454 00:04:24.454 ' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.454 --rc genhtml_branch_coverage=1 00:04:24.454 --rc genhtml_function_coverage=1 00:04:24.454 --rc genhtml_legend=1 00:04:24.454 --rc geninfo_all_blocks=1 00:04:24.454 --rc geninfo_unexecuted_blocks=1 00:04:24.454 00:04:24.454 ' 00:04:24.454 03:51:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.454 03:51:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.454 03:51:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.454 03:51:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.454 ************************************ 00:04:24.454 START TEST skip_rpc 00:04:24.454 ************************************ 00:04:24.454 03:51:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:24.454 
03:51:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4058994 00:04:24.454 03:51:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:24.454 03:51:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.454 03:51:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:24.713 [2024-12-10 03:51:23.774664] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:24.713 [2024-12-10 03:51:23.774702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058994 ] 00:04:24.713 [2024-12-10 03:51:23.848672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.713 [2024-12-10 03:51:23.886741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4058994 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4058994 ']' 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4058994 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058994 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058994' 00:04:29.983 killing process with pid 4058994 00:04:29.983 03:51:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4058994 00:04:29.983 03:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4058994 00:04:29.983 00:04:29.983 real 0m5.359s 00:04:29.983 user 0m5.113s 00:04:29.983 sys 0m0.285s 00:04:29.983 03:51:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.983 03:51:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.983 ************************************ 00:04:29.983 END TEST skip_rpc 00:04:29.983 ************************************ 00:04:29.983 03:51:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.983 03:51:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.983 03:51:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.983 03:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.983 ************************************ 00:04:29.983 START TEST skip_rpc_with_json 00:04:29.983 ************************************ 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4059916 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4059916 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4059916 ']' 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.983 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.983 [2024-12-10 03:51:29.206702] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
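
The skip_rpc run that finishes above starts spdk_tgt with --no-rpc-server and then asserts that an RPC call fails: the NOT wrapper from autotest_common.sh succeeds only when the wrapped command exits non-zero. A minimal sketch of that assertion pattern, in simplified form (the real NOT/valid_exec_arg helpers also validate the argument type and manage xtrace, as the trace above shows):

    # succeed only when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # a non-zero exit is the expected outcome here
    }

    # with --no-rpc-server there is nothing listening, so this must fail
    NOT rpc_cmd spdk_get_version
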
00:04:29.983 [2024-12-10 03:51:29.206745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059916 ] 00:04:30.242 [2024-12-10 03:51:29.279137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.242 [2024-12-10 03:51:29.315428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.502 [2024-12-10 03:51:29.539309] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.502 request: 00:04:30.502 { 00:04:30.502 "trtype": "tcp", 00:04:30.502 "method": "nvmf_get_transports", 00:04:30.502 "req_id": 1 00:04:30.502 } 00:04:30.502 Got JSON-RPC error response 00:04:30.502 response: 00:04:30.502 { 00:04:30.502 "code": -19, 00:04:30.502 "message": "No such device" 00:04:30.502 } 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.502 [2024-12-10 03:51:29.551413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.502 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.502 { 00:04:30.502 "subsystems": [ 00:04:30.502 { 00:04:30.502 "subsystem": "fsdev", 00:04:30.502 "config": [ 00:04:30.502 { 00:04:30.502 "method": "fsdev_set_opts", 00:04:30.502 "params": { 00:04:30.502 "fsdev_io_pool_size": 65535, 00:04:30.502 "fsdev_io_cache_size": 256 00:04:30.502 } 00:04:30.502 } 00:04:30.502 ] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "vfio_user_target", 00:04:30.502 "config": null 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "keyring", 00:04:30.502 "config": [] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "iobuf", 00:04:30.502 "config": [ 00:04:30.502 { 00:04:30.502 "method": "iobuf_set_options", 00:04:30.502 "params": { 00:04:30.502 "small_pool_count": 8192, 00:04:30.502 "large_pool_count": 1024, 00:04:30.502 "small_bufsize": 8192, 00:04:30.502 "large_bufsize": 135168, 00:04:30.502 "enable_numa": false 00:04:30.502 } 00:04:30.502 } 
00:04:30.502 ] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "sock", 00:04:30.502 "config": [ 00:04:30.502 { 00:04:30.502 "method": "sock_set_default_impl", 00:04:30.502 "params": { 00:04:30.502 "impl_name": "posix" 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "sock_impl_set_options", 00:04:30.502 "params": { 00:04:30.502 "impl_name": "ssl", 00:04:30.502 "recv_buf_size": 4096, 00:04:30.502 "send_buf_size": 4096, 00:04:30.502 "enable_recv_pipe": true, 00:04:30.502 "enable_quickack": false, 00:04:30.502 "enable_placement_id": 0, 00:04:30.502 "enable_zerocopy_send_server": true, 00:04:30.502 "enable_zerocopy_send_client": false, 00:04:30.502 "zerocopy_threshold": 0, 00:04:30.502 "tls_version": 0, 00:04:30.502 "enable_ktls": false 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "sock_impl_set_options", 00:04:30.502 "params": { 00:04:30.502 "impl_name": "posix", 00:04:30.502 "recv_buf_size": 2097152, 00:04:30.502 "send_buf_size": 2097152, 00:04:30.502 "enable_recv_pipe": true, 00:04:30.502 "enable_quickack": false, 00:04:30.502 "enable_placement_id": 0, 00:04:30.502 "enable_zerocopy_send_server": true, 00:04:30.502 "enable_zerocopy_send_client": false, 00:04:30.502 "zerocopy_threshold": 0, 00:04:30.502 "tls_version": 0, 00:04:30.502 "enable_ktls": false 00:04:30.502 } 00:04:30.502 } 00:04:30.502 ] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "vmd", 00:04:30.502 "config": [] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "accel", 00:04:30.502 "config": [ 00:04:30.502 { 00:04:30.502 "method": "accel_set_options", 00:04:30.502 "params": { 00:04:30.502 "small_cache_size": 128, 00:04:30.502 "large_cache_size": 16, 00:04:30.502 "task_count": 2048, 00:04:30.502 "sequence_count": 2048, 00:04:30.502 "buf_count": 2048 00:04:30.502 } 00:04:30.502 } 00:04:30.502 ] 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "subsystem": "bdev", 00:04:30.502 "config": [ 00:04:30.502 { 00:04:30.502 "method": "bdev_set_options", 00:04:30.502 "params": { 00:04:30.502 "bdev_io_pool_size": 65535, 00:04:30.502 "bdev_io_cache_size": 256, 00:04:30.502 "bdev_auto_examine": true, 00:04:30.502 "iobuf_small_cache_size": 128, 00:04:30.502 "iobuf_large_cache_size": 16 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "bdev_raid_set_options", 00:04:30.502 "params": { 00:04:30.502 "process_window_size_kb": 1024, 00:04:30.502 "process_max_bandwidth_mb_sec": 0 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "bdev_iscsi_set_options", 00:04:30.502 "params": { 00:04:30.502 "timeout_sec": 30 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "bdev_nvme_set_options", 00:04:30.502 "params": { 00:04:30.502 "action_on_timeout": "none", 00:04:30.502 "timeout_us": 0, 00:04:30.502 "timeout_admin_us": 0, 00:04:30.502 "keep_alive_timeout_ms": 10000, 00:04:30.502 "arbitration_burst": 0, 00:04:30.502 "low_priority_weight": 0, 00:04:30.502 "medium_priority_weight": 0, 00:04:30.502 "high_priority_weight": 0, 00:04:30.502 "nvme_adminq_poll_period_us": 10000, 00:04:30.502 "nvme_ioq_poll_period_us": 0, 00:04:30.502 "io_queue_requests": 0, 00:04:30.502 "delay_cmd_submit": true, 00:04:30.502 "transport_retry_count": 4, 00:04:30.502 "bdev_retry_count": 3, 00:04:30.502 "transport_ack_timeout": 0, 00:04:30.502 "ctrlr_loss_timeout_sec": 0, 00:04:30.502 "reconnect_delay_sec": 0, 00:04:30.502 "fast_io_fail_timeout_sec": 0, 00:04:30.502 "disable_auto_failback": false, 00:04:30.502 "generate_uuids": false, 00:04:30.502 "transport_tos": 
0, 00:04:30.502 "nvme_error_stat": false, 00:04:30.502 "rdma_srq_size": 0, 00:04:30.502 "io_path_stat": false, 00:04:30.502 "allow_accel_sequence": false, 00:04:30.502 "rdma_max_cq_size": 0, 00:04:30.502 "rdma_cm_event_timeout_ms": 0, 00:04:30.502 "dhchap_digests": [ 00:04:30.502 "sha256", 00:04:30.502 "sha384", 00:04:30.502 "sha512" 00:04:30.502 ], 00:04:30.502 "dhchap_dhgroups": [ 00:04:30.502 "null", 00:04:30.502 "ffdhe2048", 00:04:30.502 "ffdhe3072", 00:04:30.502 "ffdhe4096", 00:04:30.502 "ffdhe6144", 00:04:30.502 "ffdhe8192" 00:04:30.502 ] 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "bdev_nvme_set_hotplug", 00:04:30.502 "params": { 00:04:30.502 "period_us": 100000, 00:04:30.502 "enable": false 00:04:30.502 } 00:04:30.502 }, 00:04:30.502 { 00:04:30.502 "method": "bdev_wait_for_examine" 00:04:30.502 } 00:04:30.502 ] 00:04:30.502 }, 00:04:30.502 { 00:04:30.503 "subsystem": "scsi", 00:04:30.503 "config": null 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "scheduler", 00:04:30.503 "config": [ 00:04:30.503 { 00:04:30.503 "method": "framework_set_scheduler", 00:04:30.503 "params": { 00:04:30.503 "name": "static" 00:04:30.503 } 00:04:30.503 } 00:04:30.503 ] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "vhost_scsi", 00:04:30.503 "config": [] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "vhost_blk", 00:04:30.503 "config": [] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "ublk", 00:04:30.503 "config": [] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "nbd", 00:04:30.503 "config": [] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "nvmf", 00:04:30.503 "config": [ 00:04:30.503 { 00:04:30.503 "method": "nvmf_set_config", 00:04:30.503 "params": { 00:04:30.503 "discovery_filter": "match_any", 00:04:30.503 "admin_cmd_passthru": { 00:04:30.503 "identify_ctrlr": false 00:04:30.503 }, 00:04:30.503 "dhchap_digests": [ 00:04:30.503 "sha256", 00:04:30.503 "sha384", 00:04:30.503 "sha512" 00:04:30.503 ], 00:04:30.503 "dhchap_dhgroups": [ 00:04:30.503 "null", 00:04:30.503 "ffdhe2048", 00:04:30.503 "ffdhe3072", 00:04:30.503 "ffdhe4096", 00:04:30.503 "ffdhe6144", 00:04:30.503 "ffdhe8192" 00:04:30.503 ] 00:04:30.503 } 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "method": "nvmf_set_max_subsystems", 00:04:30.503 "params": { 00:04:30.503 "max_subsystems": 1024 00:04:30.503 } 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "method": "nvmf_set_crdt", 00:04:30.503 "params": { 00:04:30.503 "crdt1": 0, 00:04:30.503 "crdt2": 0, 00:04:30.503 "crdt3": 0 00:04:30.503 } 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "method": "nvmf_create_transport", 00:04:30.503 "params": { 00:04:30.503 "trtype": "TCP", 00:04:30.503 "max_queue_depth": 128, 00:04:30.503 "max_io_qpairs_per_ctrlr": 127, 00:04:30.503 "in_capsule_data_size": 4096, 00:04:30.503 "max_io_size": 131072, 00:04:30.503 "io_unit_size": 131072, 00:04:30.503 "max_aq_depth": 128, 00:04:30.503 "num_shared_buffers": 511, 00:04:30.503 "buf_cache_size": 4294967295, 00:04:30.503 "dif_insert_or_strip": false, 00:04:30.503 "zcopy": false, 00:04:30.503 "c2h_success": true, 00:04:30.503 "sock_priority": 0, 00:04:30.503 "abort_timeout_sec": 1, 00:04:30.503 "ack_timeout": 0, 00:04:30.503 "data_wr_pool_size": 0 00:04:30.503 } 00:04:30.503 } 00:04:30.503 ] 00:04:30.503 }, 00:04:30.503 { 00:04:30.503 "subsystem": "iscsi", 00:04:30.503 "config": [ 00:04:30.503 { 00:04:30.503 "method": "iscsi_set_options", 00:04:30.503 "params": { 00:04:30.503 "node_base": "iqn.2016-06.io.spdk", 00:04:30.503 "max_sessions": 
128, 00:04:30.503 "max_connections_per_session": 2, 00:04:30.503 "max_queue_depth": 64, 00:04:30.503 "default_time2wait": 2, 00:04:30.503 "default_time2retain": 20, 00:04:30.503 "first_burst_length": 8192, 00:04:30.503 "immediate_data": true, 00:04:30.503 "allow_duplicated_isid": false, 00:04:30.503 "error_recovery_level": 0, 00:04:30.503 "nop_timeout": 60, 00:04:30.503 "nop_in_interval": 30, 00:04:30.503 "disable_chap": false, 00:04:30.503 "require_chap": false, 00:04:30.503 "mutual_chap": false, 00:04:30.503 "chap_group": 0, 00:04:30.503 "max_large_datain_per_connection": 64, 00:04:30.503 "max_r2t_per_connection": 4, 00:04:30.503 "pdu_pool_size": 36864, 00:04:30.503 "immediate_data_pool_size": 16384, 00:04:30.503 "data_out_pool_size": 2048 00:04:30.503 } 00:04:30.503 } 00:04:30.503 ] 00:04:30.503 } 00:04:30.503 ] 00:04:30.503 } 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4059916 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4059916 ']' 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4059916 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059916 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059916' 00:04:30.503 killing process with pid 4059916 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4059916 00:04:30.503 03:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4059916 00:04:31.072 03:51:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.072 03:51:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4060136 00:04:31.072 03:51:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4060136 ']' 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4060136' 00:04:36.342 killing process with pid 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4060136 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.342 00:04:36.342 real 0m6.279s 00:04:36.342 user 0m5.985s 00:04:36.342 sys 0m0.594s 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 END TEST skip_rpc_with_json 00:04:36.342 ************************************ 00:04:36.342 03:51:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 START TEST skip_rpc_with_delay 00:04:36.342 ************************************ 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.342 
[2024-12-10 03:51:35.557567] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.342 00:04:36.342 real 0m0.069s 00:04:36.342 user 0m0.046s 00:04:36.342 sys 0m0.023s 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.342 03:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 END TEST skip_rpc_with_delay 00:04:36.342 ************************************ 00:04:36.342 03:51:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:36.342 03:51:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:36.342 03:51:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.342 03:51:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.602 ************************************ 00:04:36.602 START TEST exit_on_failed_rpc_init 00:04:36.602 ************************************ 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4061097 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4061097 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4061097 ']' 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.602 03:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.602 [2024-12-10 03:51:35.694622] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
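
The skip_rpc_with_delay case just completed needs no RPC traffic at all: it only checks that spdk_tgt rejects --wait-for-rpc when --no-rpc-server is also given ("Cannot use '--wait-for-rpc' if no RPC server is going to be started."), yielding es=1. A sketch of the two flag combinations, with paths shortened from the full /var/jenkins/... prefixes used above:

    # rejected: pausing for RPC configuration is meaningless without an RPC server
    spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc    # exits non-zero

    # valid: start paused, then finish initialization over RPC
    spdk_tgt -m 0x1 --wait-for-rpc &
    rpc.py framework_start_init
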
00:04:36.602 [2024-12-10 03:51:35.694663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061097 ] 00:04:36.602 [2024-12-10 03:51:35.766740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.602 [2024-12-10 03:51:35.807035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.861 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.861 [2024-12-10 03:51:36.076099] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:36.861 [2024-12-10 03:51:36.076143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061105 ] 00:04:37.120 [2024-12-10 03:51:36.148635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.120 [2024-12-10 03:51:36.187588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.120 [2024-12-10 03:51:36.187640] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
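
exit_on_failed_rpc_init runs two spdk_tgt instances on different cores (-m 0x1, then -m 0x2); since neither is given an explicit RPC socket, both default to /var/tmp/spdk.sock, and the second instance fails to bind exactly as the rpc.c errors above and below show. The shell-level exit status 234 is likely the 8-bit wrap of a negative errno return (234 = 256 - 22, i.e. -EINVAL), which the NOT helper then normalizes down to 1. To actually run two targets side by side, each would need its own socket via -r; the second socket path below is chosen only for the example:

    # first instance owns the default RPC socket /var/tmp/spdk.sock
    spdk_tgt -m 0x1 &

    # a second instance must bind elsewhere, or it exits with
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use"
    spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
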
00:04:37.120 [2024-12-10 03:51:36.187650] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.120 [2024-12-10 03:51:36.187656] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4061097 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4061097 ']' 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4061097 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4061097 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4061097' 00:04:37.120 killing process with pid 4061097 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4061097 00:04:37.120 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4061097 00:04:37.379 00:04:37.379 real 0m0.941s 00:04:37.379 user 0m1.012s 00:04:37.379 sys 0m0.375s 00:04:37.379 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.379 03:51:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.379 ************************************ 00:04:37.379 END TEST exit_on_failed_rpc_init 00:04:37.379 ************************************ 00:04:37.379 03:51:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.379 00:04:37.379 real 0m13.103s 00:04:37.379 user 0m12.353s 00:04:37.379 sys 0m1.568s 00:04:37.380 03:51:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.380 03:51:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.380 ************************************ 00:04:37.380 END TEST skip_rpc 00:04:37.380 ************************************ 00:04:37.380 03:51:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.380 03:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.380 03:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.380 03:51:36 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.639 ************************************ 00:04:37.639 START TEST rpc_client 00:04:37.639 ************************************ 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.639 * Looking for test storage... 00:04:37.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.639 03:51:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.639 --rc genhtml_branch_coverage=1 00:04:37.639 --rc genhtml_function_coverage=1 00:04:37.639 --rc genhtml_legend=1 00:04:37.639 --rc geninfo_all_blocks=1 00:04:37.639 --rc geninfo_unexecuted_blocks=1 00:04:37.639 00:04:37.639 ' 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.639 --rc genhtml_branch_coverage=1 00:04:37.639 --rc genhtml_function_coverage=1 00:04:37.639 --rc genhtml_legend=1 00:04:37.639 --rc geninfo_all_blocks=1 00:04:37.639 --rc geninfo_unexecuted_blocks=1 00:04:37.639 00:04:37.639 ' 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.639 --rc genhtml_branch_coverage=1 00:04:37.639 --rc genhtml_function_coverage=1 00:04:37.639 --rc genhtml_legend=1 00:04:37.639 --rc geninfo_all_blocks=1 00:04:37.639 --rc geninfo_unexecuted_blocks=1 00:04:37.639 00:04:37.639 ' 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.639 --rc genhtml_branch_coverage=1 00:04:37.639 --rc genhtml_function_coverage=1 00:04:37.639 --rc genhtml_legend=1 00:04:37.639 --rc geninfo_all_blocks=1 00:04:37.639 --rc geninfo_unexecuted_blocks=1 00:04:37.639 00:04:37.639 ' 00:04:37.639 03:51:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:37.639 OK 00:04:37.639 03:51:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.639 00:04:37.639 real 0m0.199s 00:04:37.639 user 0m0.119s 00:04:37.639 sys 0m0.094s 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.639 03:51:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:37.639 ************************************ 00:04:37.639 END TEST rpc_client 00:04:37.639 ************************************ 00:04:37.639 03:51:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
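
The "Looking for test storage" banner and the version xtrace above (repeated at the start of json_config below) come from scripts/common.sh: the installed lcov version is parsed and compared against 1.15 with cmp_versions to decide which coverage flags to export. The comparison splits both version strings on any of .-: into arrays and compares field by field; a condensed sketch of the less-than case, assuming numeric fields only (the real helper also dispatches on the <, <=, >, >= operators):

    # "lt A B" succeeds when version A sorts before version B
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x: use the --rc lcov_*_coverage options"
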
00:04:37.639 03:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.639 03:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.639 03:51:36 -- common/autotest_common.sh@10 -- # set +x 00:04:37.899 ************************************ 00:04:37.899 START TEST json_config 00:04:37.899 ************************************ 00:04:37.899 03:51:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.899 03:51:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.899 03:51:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.899 03:51:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.899 03:51:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.899 03:51:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.899 03:51:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:37.899 03:51:37 json_config -- scripts/common.sh@345 -- # : 1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.899 03:51:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.899 03:51:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@353 -- # local d=1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.899 03:51:37 json_config -- scripts/common.sh@355 -- # echo 1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.899 03:51:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@353 -- # local d=2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.899 03:51:37 json_config -- scripts/common.sh@355 -- # echo 2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.899 03:51:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.899 03:51:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.899 03:51:37 json_config -- scripts/common.sh@368 -- # return 0 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.899 --rc genhtml_branch_coverage=1 00:04:37.899 --rc genhtml_function_coverage=1 00:04:37.899 --rc genhtml_legend=1 00:04:37.899 --rc geninfo_all_blocks=1 00:04:37.899 --rc geninfo_unexecuted_blocks=1 00:04:37.899 00:04:37.899 ' 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.899 --rc genhtml_branch_coverage=1 00:04:37.899 --rc genhtml_function_coverage=1 00:04:37.899 --rc genhtml_legend=1 00:04:37.899 --rc geninfo_all_blocks=1 00:04:37.899 --rc geninfo_unexecuted_blocks=1 00:04:37.899 00:04:37.899 ' 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.899 --rc genhtml_branch_coverage=1 00:04:37.899 --rc genhtml_function_coverage=1 00:04:37.899 --rc genhtml_legend=1 00:04:37.899 --rc geninfo_all_blocks=1 00:04:37.899 --rc geninfo_unexecuted_blocks=1 00:04:37.899 00:04:37.899 ' 00:04:37.899 03:51:37 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.899 --rc genhtml_branch_coverage=1 00:04:37.899 --rc genhtml_function_coverage=1 00:04:37.899 --rc genhtml_legend=1 00:04:37.899 --rc geninfo_all_blocks=1 00:04:37.899 --rc geninfo_unexecuted_blocks=1 00:04:37.899 00:04:37.899 ' 00:04:37.899 03:51:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:37.899 03:51:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.899 03:51:37 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.899 03:51:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.899 03:51:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.899 03:51:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.899 03:51:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.899 03:51:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.899 03:51:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.899 03:51:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.899 03:51:37 json_config -- paths/export.sh@5 -- # export PATH 00:04:37.900 03:51:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@51 -- # : 0 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:37.900 03:51:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.900 03:51:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:37.900 INFO: JSON configuration test init 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 03:51:37 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:37.900 03:51:37 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:37.900 03:51:37 json_config -- json_config/common.sh@10 -- # shift 00:04:37.900 03:51:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.900 03:51:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.900 03:51:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.900 03:51:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.900 03:51:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.900 03:51:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4061451 00:04:37.900 03:51:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.900 Waiting for target to run... 00:04:37.900 03:51:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:37.900 03:51:37 json_config -- json_config/common.sh@25 -- # waitforlisten 4061451 /var/tmp/spdk_tgt.sock 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 4061451 ']' 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.900 03:51:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.159 [2024-12-10 03:51:37.204665] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:04:38.159 [2024-12-10 03:51:37.204713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061451 ] 00:04:38.418 [2024-12-10 03:51:37.657806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.676 [2024-12-10 03:51:37.713671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:38.935 03:51:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.935 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.935 03:51:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.935 03:51:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:38.935 03:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.223 03:51:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:42.223 03:51:41 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@54 -- # sort 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.223 03:51:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:42.223 03:51:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.223 03:51:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.481 MallocForNvmf0 00:04:42.481 03:51:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.481 03:51:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.740 MallocForNvmf1 00:04:42.740 03:51:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.740 03:51:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.740 [2024-12-10 03:51:41.991211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.740 03:51:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.740 03:51:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.999 03:51:42 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.999 03:51:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.258 03:51:42 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.258 03:51:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.516 03:51:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.517 03:51:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.517 [2024-12-10 03:51:42.797636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.775 03:51:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:43.775 03:51:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.775 03:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.775 03:51:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:43.775 03:51:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.775 03:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.775 03:51:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:43.775 03:51:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.775 03:51:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.033 MallocBdevForConfigChangeCheck 00:04:44.033 03:51:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:44.033 03:51:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.033 03:51:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.033 03:51:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:44.033 03:51:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.292 03:51:43 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:44.292 INFO: shutting down applications... 
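For reference, the target state built up in the trace above can be reproduced by hand with the same RPCs. A minimal sketch (the rpc() wrapper is added here for readability; every command, flag, and name below is taken verbatim from this run):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512 B blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
  rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport init
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json                 # snapshot used by the relaunch below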
00:04:44.292 03:51:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:44.292 03:51:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:44.292 03:51:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:44.292 03:51:43 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:46.196 Calling clear_iscsi_subsystem 00:04:46.196 Calling clear_nvmf_subsystem 00:04:46.196 Calling clear_nbd_subsystem 00:04:46.196 Calling clear_ublk_subsystem 00:04:46.196 Calling clear_vhost_blk_subsystem 00:04:46.196 Calling clear_vhost_scsi_subsystem 00:04:46.196 Calling clear_bdev_subsystem 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@352 -- # break 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:46.196 03:51:45 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:46.196 03:51:45 json_config -- json_config/common.sh@31 -- # local app=target 00:04:46.196 03:51:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.196 03:51:45 json_config -- json_config/common.sh@35 -- # [[ -n 4061451 ]] 00:04:46.196 03:51:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4061451 00:04:46.196 03:51:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.196 03:51:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.196 03:51:45 json_config -- json_config/common.sh@41 -- # kill -0 4061451 00:04:46.196 03:51:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.764 03:51:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.764 03:51:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.764 03:51:45 json_config -- json_config/common.sh@41 -- # kill -0 4061451 00:04:46.764 03:51:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.764 03:51:45 json_config -- json_config/common.sh@43 -- # break 00:04:46.764 03:51:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.764 03:51:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.764 SPDK target shutdown done 00:04:46.764 03:51:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:46.764 INFO: relaunching applications... 
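The relaunch that follows is just spdk_tgt started again from the JSON snapshot, with the harness then waiting on the RPC socket. In outline (all arguments appear in the trace below):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
  # waitforlisten then polls /var/tmp/spdk_tgt.sock until the target answers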
00:04:46.764 03:51:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.764 03:51:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:46.764 03:51:45 json_config -- json_config/common.sh@10 -- # shift 00:04:46.764 03:51:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.764 03:51:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.764 03:51:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.764 03:51:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.764 03:51:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.764 03:51:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4062937 00:04:46.764 03:51:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.764 Waiting for target to run... 00:04:46.764 03:51:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.764 03:51:45 json_config -- json_config/common.sh@25 -- # waitforlisten 4062937 /var/tmp/spdk_tgt.sock 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 4062937 ']' 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.764 03:51:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.764 [2024-12-10 03:51:46.026717] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:46.764 [2024-12-10 03:51:46.026779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062937 ] 00:04:47.331 [2024-12-10 03:51:46.489408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.331 [2024-12-10 03:51:46.545213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.619 [2024-12-10 03:51:49.572997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.619 [2024-12-10 03:51:49.605278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.186 03:51:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.186 03:51:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:51.186 03:51:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.186 00:04:51.186 03:51:50 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:51.186 03:51:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:51.186 INFO: Checking if target configuration is the same... 
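The equality check traced next is a normalize-then-diff: json_diff.sh runs both configurations through config_filter.py -method sort and compares the results. A sketch of the same flow, assuming config_filter.py filters stdin the way json_diff.sh invokes it, with rpc() as defined in the earlier sketch (the temp-file names are this run's mktemp outputs):

  rpc save_config | test/json_config/config_filter.py -method sort > /tmp/62.ZVD
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.AZd
  diff -u /tmp/62.ZVD /tmp/spdk_tgt_config.json.AZd \
      && echo 'INFO: JSON config files are the same'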
00:04:51.186 03:51:50 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.186 03:51:50 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:51.186 03:51:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.186 + '[' 2 -ne 2 ']' 00:04:51.186 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:51.186 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:51.186 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:51.186 +++ basename /dev/fd/62 00:04:51.186 ++ mktemp /tmp/62.XXX 00:04:51.186 + tmp_file_1=/tmp/62.ZVD 00:04:51.186 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.186 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.186 + tmp_file_2=/tmp/spdk_tgt_config.json.AZd 00:04:51.186 + ret=0 00:04:51.186 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.445 + diff -u /tmp/62.ZVD /tmp/spdk_tgt_config.json.AZd 00:04:51.445 + echo 'INFO: JSON config files are the same' 00:04:51.445 INFO: JSON config files are the same 00:04:51.445 + rm /tmp/62.ZVD /tmp/spdk_tgt_config.json.AZd 00:04:51.445 + exit 0 00:04:51.445 03:51:50 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:51.445 03:51:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:51.445 INFO: changing configuration and checking if this can be detected... 00:04:51.445 03:51:50 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.445 03:51:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.704 03:51:50 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.704 03:51:50 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:51.704 03:51:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.704 + '[' 2 -ne 2 ']' 00:04:51.704 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:51.704 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:51.704 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:51.704 +++ basename /dev/fd/62 00:04:51.704 ++ mktemp /tmp/62.XXX 00:04:51.704 + tmp_file_1=/tmp/62.Uqg 00:04:51.704 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.704 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.704 + tmp_file_2=/tmp/spdk_tgt_config.json.wo9 00:04:51.704 + ret=0 00:04:51.704 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.962 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.962 + diff -u /tmp/62.Uqg /tmp/spdk_tgt_config.json.wo9 00:04:51.962 + ret=1 00:04:51.962 + echo '=== Start of file: /tmp/62.Uqg ===' 00:04:51.962 + cat /tmp/62.Uqg 00:04:51.962 + echo '=== End of file: /tmp/62.Uqg ===' 00:04:51.962 + echo '' 00:04:51.962 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wo9 ===' 00:04:51.962 + cat /tmp/spdk_tgt_config.json.wo9 00:04:51.962 + echo '=== End of file: /tmp/spdk_tgt_config.json.wo9 ===' 00:04:51.962 + echo '' 00:04:51.962 + rm /tmp/62.Uqg /tmp/spdk_tgt_config.json.wo9 00:04:51.962 + exit 1 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:51.962 INFO: configuration change detected. 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:51.962 03:51:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.962 03:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@324 -- # [[ -n 4062937 ]] 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:51.962 03:51:51 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:51.962 03:51:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.963 03:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.222 03:51:51 json_config -- json_config/json_config.sh@330 -- # killprocess 4062937 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@954 -- # '[' -z 4062937 ']' 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@958 -- # kill -0 4062937 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@959 -- # uname 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.222 03:51:51 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4062937 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4062937' 00:04:52.222 killing process with pid 4062937 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@973 -- # kill 4062937 00:04:52.222 03:51:51 json_config -- common/autotest_common.sh@978 -- # wait 4062937 00:04:53.600 03:51:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.600 03:51:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:53.600 03:51:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.600 03:51:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 03:51:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:53.600 03:51:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:53.600 INFO: Success 00:04:53.600 00:04:53.600 real 0m15.926s 00:04:53.600 user 0m16.350s 00:04:53.600 sys 0m2.776s 00:04:53.600 03:51:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.600 03:51:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 ************************************ 00:04:53.600 END TEST json_config 00:04:53.600 ************************************ 00:04:53.860 03:51:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:53.860 03:51:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.860 03:51:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.860 03:51:52 -- common/autotest_common.sh@10 -- # set +x 00:04:53.860 ************************************ 00:04:53.860 START TEST json_config_extra_key 00:04:53.860 ************************************ 00:04:53.860 03:51:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.860 03:51:53 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.860 --rc genhtml_branch_coverage=1 00:04:53.860 --rc genhtml_function_coverage=1 00:04:53.860 --rc genhtml_legend=1 00:04:53.860 --rc geninfo_all_blocks=1 00:04:53.860 --rc geninfo_unexecuted_blocks=1 00:04:53.860 00:04:53.860 ' 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.860 --rc genhtml_branch_coverage=1 00:04:53.860 --rc genhtml_function_coverage=1 00:04:53.860 --rc genhtml_legend=1 00:04:53.860 --rc geninfo_all_blocks=1 00:04:53.860 --rc geninfo_unexecuted_blocks=1 00:04:53.860 00:04:53.860 ' 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.860 --rc genhtml_branch_coverage=1 00:04:53.860 --rc genhtml_function_coverage=1 00:04:53.860 --rc genhtml_legend=1 00:04:53.860 --rc geninfo_all_blocks=1 00:04:53.860 --rc geninfo_unexecuted_blocks=1 00:04:53.860 00:04:53.860 ' 00:04:53.860 03:51:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.860 --rc genhtml_branch_coverage=1 00:04:53.860 --rc genhtml_function_coverage=1 00:04:53.860 --rc genhtml_legend=1 00:04:53.860 --rc geninfo_all_blocks=1 00:04:53.860 --rc geninfo_unexecuted_blocks=1 00:04:53.860 00:04:53.860 ' 00:04:53.860 03:51:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.860 03:51:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.860 03:51:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.860 03:51:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.861 03:51:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.861 03:51:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.861 03:51:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.861 03:51:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.861 03:51:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:53.861 INFO: launching applications... 
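The waitforlisten step traced below boils down to polling the new target's RPC socket until it responds. An illustrative loop, not the helper's exact implementation (rpc_get_methods is a real RPC, seen later in this log):

  while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep retrying until spdk_tgt is listening
  done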
00:04:53.861 03:51:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4064391 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.861 Waiting for target to run... 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4064391 /var/tmp/spdk_tgt.sock 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4064391 ']' 00:04:53.861 03:51:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.861 03:51:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:54.120 [2024-12-10 03:51:53.191689] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:54.120 [2024-12-10 03:51:53.191735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064391 ] 00:04:54.379 [2024-12-10 03:51:53.473009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.379 [2024-12-10 03:51:53.505407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.947 03:51:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.947 03:51:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:54.947 03:51:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:54.947 00:04:54.947 03:51:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:54.947 INFO: shutting down applications... 
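The shutdown that follows repeats the pattern from the json_config run above: send SIGINT, then poll the pid for up to 30 half-second intervals. In outline (the constants are the ones traced from json_config/common.sh):

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # pid gone: shutdown done
      sleep 0.5
  done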
00:04:54.947 03:51:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:54.947 03:51:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:54.947 03:51:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4064391 ]] 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4064391 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4064391 00:04:54.948 03:51:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4064391 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.520 03:51:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.520 SPDK target shutdown done 00:04:55.520 03:51:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.520 Success 00:04:55.520 00:04:55.520 real 0m1.565s 00:04:55.520 user 0m1.334s 00:04:55.520 sys 0m0.401s 00:04:55.520 03:51:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.520 03:51:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.520 ************************************ 00:04:55.520 END TEST json_config_extra_key 00:04:55.520 ************************************ 00:04:55.520 03:51:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.520 03:51:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.520 03:51:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.520 03:51:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.520 ************************************ 00:04:55.520 START TEST alias_rpc 00:04:55.520 ************************************ 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.520 * Looking for test storage... 
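The alias_rpc test starting here boots a bare spdk_tgt, which therefore listens on the default /var/tmp/spdk.sock, and then replays a configuration with rpc.py load_config -i, as the trace below shows. Roughly (config.json is a placeholder; the trace does not show where the replayed config comes from):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &
  # default socket /var/tmp/spdk.sock, so no -s argument is needed
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i < config.json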
00:04:55.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.520 03:51:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:55.520 03:51:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.521 --rc genhtml_branch_coverage=1 00:04:55.521 --rc genhtml_function_coverage=1 00:04:55.521 --rc genhtml_legend=1 00:04:55.521 --rc geninfo_all_blocks=1 00:04:55.521 --rc geninfo_unexecuted_blocks=1 00:04:55.521 00:04:55.521 ' 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.521 --rc genhtml_branch_coverage=1 00:04:55.521 --rc genhtml_function_coverage=1 00:04:55.521 --rc genhtml_legend=1 00:04:55.521 --rc geninfo_all_blocks=1 00:04:55.521 --rc geninfo_unexecuted_blocks=1 00:04:55.521 00:04:55.521 ' 00:04:55.521 03:51:54 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.521 --rc genhtml_branch_coverage=1 00:04:55.521 --rc genhtml_function_coverage=1 00:04:55.521 --rc genhtml_legend=1 00:04:55.521 --rc geninfo_all_blocks=1 00:04:55.521 --rc geninfo_unexecuted_blocks=1 00:04:55.521 00:04:55.521 ' 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.521 --rc genhtml_branch_coverage=1 00:04:55.521 --rc genhtml_function_coverage=1 00:04:55.521 --rc genhtml_legend=1 00:04:55.521 --rc geninfo_all_blocks=1 00:04:55.521 --rc geninfo_unexecuted_blocks=1 00:04:55.521 00:04:55.521 ' 00:04:55.521 03:51:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.521 03:51:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4064677 00:04:55.521 03:51:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4064677 00:04:55.521 03:51:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4064677 ']' 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.521 03:51:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.780 [2024-12-10 03:51:54.820936] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:04:55.780 [2024-12-10 03:51:54.820984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064677 ] 00:04:55.780 [2024-12-10 03:51:54.893918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.780 [2024-12-10 03:51:54.934340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.037 03:51:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.037 03:51:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.037 03:51:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:56.296 03:51:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4064677 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4064677 ']' 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4064677 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4064677 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4064677' 00:04:56.296 killing process with pid 4064677 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 4064677 00:04:56.296 03:51:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 4064677 00:04:56.555 00:04:56.555 real 0m1.126s 00:04:56.555 user 0m1.143s 00:04:56.555 sys 0m0.419s 00:04:56.555 03:51:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.555 03:51:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.555 ************************************ 00:04:56.555 END TEST alias_rpc 00:04:56.555 ************************************ 00:04:56.555 03:51:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.555 03:51:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.555 03:51:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.555 03:51:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.555 03:51:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.555 ************************************ 00:04:56.555 START TEST spdkcli_tcp 00:04:56.555 ************************************ 00:04:56.555 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.814 * Looking for test storage... 
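The spdkcli_tcp test starting here checks RPC over TCP: a socat process bridges a TCP port to the target's UNIX socket, and rpc.py then connects by address and port instead of socket path. The essential wiring, with addresses and flags as traced below:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # retry up to 100 times, 2 s timeout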
00:04:56.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:56.814 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.814 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.814 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.814 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.814 03:51:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.815 03:51:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.815 --rc genhtml_branch_coverage=1 00:04:56.815 --rc genhtml_function_coverage=1 00:04:56.815 --rc genhtml_legend=1 00:04:56.815 --rc geninfo_all_blocks=1 00:04:56.815 --rc geninfo_unexecuted_blocks=1 00:04:56.815 00:04:56.815 ' 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.815 --rc genhtml_branch_coverage=1 00:04:56.815 --rc genhtml_function_coverage=1 00:04:56.815 --rc genhtml_legend=1 00:04:56.815 --rc geninfo_all_blocks=1 00:04:56.815 --rc 
geninfo_unexecuted_blocks=1 00:04:56.815 00:04:56.815 ' 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.815 --rc genhtml_branch_coverage=1 00:04:56.815 --rc genhtml_function_coverage=1 00:04:56.815 --rc genhtml_legend=1 00:04:56.815 --rc geninfo_all_blocks=1 00:04:56.815 --rc geninfo_unexecuted_blocks=1 00:04:56.815 00:04:56.815 ' 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.815 --rc genhtml_branch_coverage=1 00:04:56.815 --rc genhtml_function_coverage=1 00:04:56.815 --rc genhtml_legend=1 00:04:56.815 --rc geninfo_all_blocks=1 00:04:56.815 --rc geninfo_unexecuted_blocks=1 00:04:56.815 00:04:56.815 ' 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4064959 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4064959 00:04:56.815 03:51:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4064959 ']' 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.815 03:51:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 [2024-12-10 03:51:56.018559] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:04:56.815 [2024-12-10 03:51:56.018608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064959 ] 00:04:56.815 [2024-12-10 03:51:56.091980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.074 [2024-12-10 03:51:56.132159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.074 [2024-12-10 03:51:56.132160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.074 03:51:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.074 03:51:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:57.074 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4064970 00:04:57.074 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.074 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.333 [ 00:04:57.333 "bdev_malloc_delete", 00:04:57.333 "bdev_malloc_create", 00:04:57.333 "bdev_null_resize", 00:04:57.333 "bdev_null_delete", 00:04:57.333 "bdev_null_create", 00:04:57.333 "bdev_nvme_cuse_unregister", 00:04:57.333 "bdev_nvme_cuse_register", 00:04:57.333 "bdev_opal_new_user", 00:04:57.333 "bdev_opal_set_lock_state", 00:04:57.333 "bdev_opal_delete", 00:04:57.333 "bdev_opal_get_info", 00:04:57.333 "bdev_opal_create", 00:04:57.333 "bdev_nvme_opal_revert", 00:04:57.333 "bdev_nvme_opal_init", 00:04:57.333 "bdev_nvme_send_cmd", 00:04:57.333 "bdev_nvme_set_keys", 00:04:57.333 "bdev_nvme_get_path_iostat", 00:04:57.333 "bdev_nvme_get_mdns_discovery_info", 00:04:57.333 "bdev_nvme_stop_mdns_discovery", 00:04:57.333 "bdev_nvme_start_mdns_discovery", 00:04:57.333 "bdev_nvme_set_multipath_policy", 00:04:57.333 "bdev_nvme_set_preferred_path", 00:04:57.333 "bdev_nvme_get_io_paths", 00:04:57.333 "bdev_nvme_remove_error_injection", 00:04:57.333 "bdev_nvme_add_error_injection", 00:04:57.333 "bdev_nvme_get_discovery_info", 00:04:57.333 "bdev_nvme_stop_discovery", 00:04:57.333 "bdev_nvme_start_discovery", 00:04:57.333 "bdev_nvme_get_controller_health_info", 00:04:57.333 "bdev_nvme_disable_controller", 00:04:57.333 "bdev_nvme_enable_controller", 00:04:57.333 "bdev_nvme_reset_controller", 00:04:57.333 "bdev_nvme_get_transport_statistics", 00:04:57.333 "bdev_nvme_apply_firmware", 00:04:57.333 "bdev_nvme_detach_controller", 00:04:57.333 "bdev_nvme_get_controllers", 00:04:57.333 "bdev_nvme_attach_controller", 00:04:57.333 "bdev_nvme_set_hotplug", 00:04:57.333 "bdev_nvme_set_options", 00:04:57.333 "bdev_passthru_delete", 00:04:57.333 "bdev_passthru_create", 00:04:57.333 "bdev_lvol_set_parent_bdev", 00:04:57.333 "bdev_lvol_set_parent", 00:04:57.333 "bdev_lvol_check_shallow_copy", 00:04:57.333 "bdev_lvol_start_shallow_copy", 00:04:57.333 "bdev_lvol_grow_lvstore", 00:04:57.333 "bdev_lvol_get_lvols", 00:04:57.333 "bdev_lvol_get_lvstores", 00:04:57.333 "bdev_lvol_delete", 00:04:57.333 "bdev_lvol_set_read_only", 00:04:57.333 "bdev_lvol_resize", 00:04:57.333 "bdev_lvol_decouple_parent", 00:04:57.333 "bdev_lvol_inflate", 00:04:57.333 "bdev_lvol_rename", 00:04:57.333 "bdev_lvol_clone_bdev", 00:04:57.333 "bdev_lvol_clone", 00:04:57.333 "bdev_lvol_snapshot", 00:04:57.333 "bdev_lvol_create", 00:04:57.333 "bdev_lvol_delete_lvstore", 00:04:57.333 "bdev_lvol_rename_lvstore", 
00:04:57.333 "bdev_lvol_create_lvstore", 00:04:57.333 "bdev_raid_set_options", 00:04:57.333 "bdev_raid_remove_base_bdev", 00:04:57.333 "bdev_raid_add_base_bdev", 00:04:57.333 "bdev_raid_delete", 00:04:57.333 "bdev_raid_create", 00:04:57.333 "bdev_raid_get_bdevs", 00:04:57.333 "bdev_error_inject_error", 00:04:57.333 "bdev_error_delete", 00:04:57.333 "bdev_error_create", 00:04:57.333 "bdev_split_delete", 00:04:57.333 "bdev_split_create", 00:04:57.333 "bdev_delay_delete", 00:04:57.333 "bdev_delay_create", 00:04:57.333 "bdev_delay_update_latency", 00:04:57.333 "bdev_zone_block_delete", 00:04:57.334 "bdev_zone_block_create", 00:04:57.334 "blobfs_create", 00:04:57.334 "blobfs_detect", 00:04:57.334 "blobfs_set_cache_size", 00:04:57.334 "bdev_aio_delete", 00:04:57.334 "bdev_aio_rescan", 00:04:57.334 "bdev_aio_create", 00:04:57.334 "bdev_ftl_set_property", 00:04:57.334 "bdev_ftl_get_properties", 00:04:57.334 "bdev_ftl_get_stats", 00:04:57.334 "bdev_ftl_unmap", 00:04:57.334 "bdev_ftl_unload", 00:04:57.334 "bdev_ftl_delete", 00:04:57.334 "bdev_ftl_load", 00:04:57.334 "bdev_ftl_create", 00:04:57.334 "bdev_virtio_attach_controller", 00:04:57.334 "bdev_virtio_scsi_get_devices", 00:04:57.334 "bdev_virtio_detach_controller", 00:04:57.334 "bdev_virtio_blk_set_hotplug", 00:04:57.334 "bdev_iscsi_delete", 00:04:57.334 "bdev_iscsi_create", 00:04:57.334 "bdev_iscsi_set_options", 00:04:57.334 "accel_error_inject_error", 00:04:57.334 "ioat_scan_accel_module", 00:04:57.334 "dsa_scan_accel_module", 00:04:57.334 "iaa_scan_accel_module", 00:04:57.334 "vfu_virtio_create_fs_endpoint", 00:04:57.334 "vfu_virtio_create_scsi_endpoint", 00:04:57.334 "vfu_virtio_scsi_remove_target", 00:04:57.334 "vfu_virtio_scsi_add_target", 00:04:57.334 "vfu_virtio_create_blk_endpoint", 00:04:57.334 "vfu_virtio_delete_endpoint", 00:04:57.334 "keyring_file_remove_key", 00:04:57.334 "keyring_file_add_key", 00:04:57.334 "keyring_linux_set_options", 00:04:57.334 "fsdev_aio_delete", 00:04:57.334 "fsdev_aio_create", 00:04:57.334 "iscsi_get_histogram", 00:04:57.334 "iscsi_enable_histogram", 00:04:57.334 "iscsi_set_options", 00:04:57.334 "iscsi_get_auth_groups", 00:04:57.334 "iscsi_auth_group_remove_secret", 00:04:57.334 "iscsi_auth_group_add_secret", 00:04:57.334 "iscsi_delete_auth_group", 00:04:57.334 "iscsi_create_auth_group", 00:04:57.334 "iscsi_set_discovery_auth", 00:04:57.334 "iscsi_get_options", 00:04:57.334 "iscsi_target_node_request_logout", 00:04:57.334 "iscsi_target_node_set_redirect", 00:04:57.334 "iscsi_target_node_set_auth", 00:04:57.334 "iscsi_target_node_add_lun", 00:04:57.334 "iscsi_get_stats", 00:04:57.334 "iscsi_get_connections", 00:04:57.334 "iscsi_portal_group_set_auth", 00:04:57.334 "iscsi_start_portal_group", 00:04:57.334 "iscsi_delete_portal_group", 00:04:57.334 "iscsi_create_portal_group", 00:04:57.334 "iscsi_get_portal_groups", 00:04:57.334 "iscsi_delete_target_node", 00:04:57.334 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.334 "iscsi_target_node_add_pg_ig_maps", 00:04:57.334 "iscsi_create_target_node", 00:04:57.334 "iscsi_get_target_nodes", 00:04:57.334 "iscsi_delete_initiator_group", 00:04:57.334 "iscsi_initiator_group_remove_initiators", 00:04:57.334 "iscsi_initiator_group_add_initiators", 00:04:57.334 "iscsi_create_initiator_group", 00:04:57.334 "iscsi_get_initiator_groups", 00:04:57.334 "nvmf_set_crdt", 00:04:57.334 "nvmf_set_config", 00:04:57.334 "nvmf_set_max_subsystems", 00:04:57.334 "nvmf_stop_mdns_prr", 00:04:57.334 "nvmf_publish_mdns_prr", 00:04:57.334 "nvmf_subsystem_get_listeners", 00:04:57.334 
"nvmf_subsystem_get_qpairs", 00:04:57.334 "nvmf_subsystem_get_controllers", 00:04:57.334 "nvmf_get_stats", 00:04:57.334 "nvmf_get_transports", 00:04:57.334 "nvmf_create_transport", 00:04:57.334 "nvmf_get_targets", 00:04:57.334 "nvmf_delete_target", 00:04:57.334 "nvmf_create_target", 00:04:57.334 "nvmf_subsystem_allow_any_host", 00:04:57.334 "nvmf_subsystem_set_keys", 00:04:57.334 "nvmf_subsystem_remove_host", 00:04:57.334 "nvmf_subsystem_add_host", 00:04:57.334 "nvmf_ns_remove_host", 00:04:57.334 "nvmf_ns_add_host", 00:04:57.334 "nvmf_subsystem_remove_ns", 00:04:57.334 "nvmf_subsystem_set_ns_ana_group", 00:04:57.334 "nvmf_subsystem_add_ns", 00:04:57.334 "nvmf_subsystem_listener_set_ana_state", 00:04:57.334 "nvmf_discovery_get_referrals", 00:04:57.334 "nvmf_discovery_remove_referral", 00:04:57.334 "nvmf_discovery_add_referral", 00:04:57.334 "nvmf_subsystem_remove_listener", 00:04:57.334 "nvmf_subsystem_add_listener", 00:04:57.334 "nvmf_delete_subsystem", 00:04:57.334 "nvmf_create_subsystem", 00:04:57.334 "nvmf_get_subsystems", 00:04:57.334 "env_dpdk_get_mem_stats", 00:04:57.334 "nbd_get_disks", 00:04:57.334 "nbd_stop_disk", 00:04:57.334 "nbd_start_disk", 00:04:57.334 "ublk_recover_disk", 00:04:57.334 "ublk_get_disks", 00:04:57.334 "ublk_stop_disk", 00:04:57.334 "ublk_start_disk", 00:04:57.334 "ublk_destroy_target", 00:04:57.334 "ublk_create_target", 00:04:57.334 "virtio_blk_create_transport", 00:04:57.334 "virtio_blk_get_transports", 00:04:57.334 "vhost_controller_set_coalescing", 00:04:57.334 "vhost_get_controllers", 00:04:57.334 "vhost_delete_controller", 00:04:57.334 "vhost_create_blk_controller", 00:04:57.334 "vhost_scsi_controller_remove_target", 00:04:57.334 "vhost_scsi_controller_add_target", 00:04:57.334 "vhost_start_scsi_controller", 00:04:57.334 "vhost_create_scsi_controller", 00:04:57.334 "thread_set_cpumask", 00:04:57.334 "scheduler_set_options", 00:04:57.334 "framework_get_governor", 00:04:57.334 "framework_get_scheduler", 00:04:57.334 "framework_set_scheduler", 00:04:57.334 "framework_get_reactors", 00:04:57.334 "thread_get_io_channels", 00:04:57.334 "thread_get_pollers", 00:04:57.334 "thread_get_stats", 00:04:57.334 "framework_monitor_context_switch", 00:04:57.334 "spdk_kill_instance", 00:04:57.334 "log_enable_timestamps", 00:04:57.334 "log_get_flags", 00:04:57.334 "log_clear_flag", 00:04:57.334 "log_set_flag", 00:04:57.334 "log_get_level", 00:04:57.334 "log_set_level", 00:04:57.334 "log_get_print_level", 00:04:57.334 "log_set_print_level", 00:04:57.334 "framework_enable_cpumask_locks", 00:04:57.334 "framework_disable_cpumask_locks", 00:04:57.334 "framework_wait_init", 00:04:57.334 "framework_start_init", 00:04:57.334 "scsi_get_devices", 00:04:57.334 "bdev_get_histogram", 00:04:57.334 "bdev_enable_histogram", 00:04:57.334 "bdev_set_qos_limit", 00:04:57.334 "bdev_set_qd_sampling_period", 00:04:57.334 "bdev_get_bdevs", 00:04:57.334 "bdev_reset_iostat", 00:04:57.334 "bdev_get_iostat", 00:04:57.334 "bdev_examine", 00:04:57.334 "bdev_wait_for_examine", 00:04:57.334 "bdev_set_options", 00:04:57.334 "accel_get_stats", 00:04:57.334 "accel_set_options", 00:04:57.334 "accel_set_driver", 00:04:57.334 "accel_crypto_key_destroy", 00:04:57.334 "accel_crypto_keys_get", 00:04:57.334 "accel_crypto_key_create", 00:04:57.334 "accel_assign_opc", 00:04:57.334 "accel_get_module_info", 00:04:57.334 "accel_get_opc_assignments", 00:04:57.334 "vmd_rescan", 00:04:57.334 "vmd_remove_device", 00:04:57.334 "vmd_enable", 00:04:57.334 "sock_get_default_impl", 00:04:57.334 "sock_set_default_impl", 
00:04:57.334 "sock_impl_set_options", 00:04:57.334 "sock_impl_get_options", 00:04:57.334 "iobuf_get_stats", 00:04:57.334 "iobuf_set_options", 00:04:57.334 "keyring_get_keys", 00:04:57.334 "vfu_tgt_set_base_path", 00:04:57.334 "framework_get_pci_devices", 00:04:57.334 "framework_get_config", 00:04:57.334 "framework_get_subsystems", 00:04:57.334 "fsdev_set_opts", 00:04:57.334 "fsdev_get_opts", 00:04:57.334 "trace_get_info", 00:04:57.334 "trace_get_tpoint_group_mask", 00:04:57.334 "trace_disable_tpoint_group", 00:04:57.334 "trace_enable_tpoint_group", 00:04:57.334 "trace_clear_tpoint_mask", 00:04:57.335 "trace_set_tpoint_mask", 00:04:57.335 "notify_get_notifications", 00:04:57.335 "notify_get_types", 00:04:57.335 "spdk_get_version", 00:04:57.335 "rpc_get_methods" 00:04:57.335 ] 00:04:57.335 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.335 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.335 03:51:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4064959 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4064959 ']' 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4064959 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.335 03:51:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4064959 00:04:57.594 03:51:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.594 03:51:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.594 03:51:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4064959' 00:04:57.594 killing process with pid 4064959 00:04:57.594 03:51:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4064959 00:04:57.594 03:51:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4064959 00:04:57.853 00:04:57.853 real 0m1.139s 00:04:57.853 user 0m1.918s 00:04:57.853 sys 0m0.443s 00:04:57.853 03:51:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.853 03:51:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.853 ************************************ 00:04:57.853 END TEST spdkcli_tcp 00:04:57.853 ************************************ 00:04:57.853 03:51:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.853 03:51:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.853 03:51:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.853 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.853 ************************************ 00:04:57.853 START TEST dpdk_mem_utility 00:04:57.853 ************************************ 00:04:57.853 03:51:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.853 * Looking for test storage... 
00:04:57.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:57.853 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.853 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.853 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.112 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.112 03:51:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.113 03:51:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.113 --rc genhtml_branch_coverage=1 00:04:58.113 --rc genhtml_function_coverage=1 00:04:58.113 --rc genhtml_legend=1 00:04:58.113 --rc geninfo_all_blocks=1 00:04:58.113 --rc geninfo_unexecuted_blocks=1 00:04:58.113 00:04:58.113 ' 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.113 --rc 
genhtml_branch_coverage=1 00:04:58.113 --rc genhtml_function_coverage=1 00:04:58.113 --rc genhtml_legend=1 00:04:58.113 --rc geninfo_all_blocks=1 00:04:58.113 --rc geninfo_unexecuted_blocks=1 00:04:58.113 00:04:58.113 ' 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.113 --rc genhtml_branch_coverage=1 00:04:58.113 --rc genhtml_function_coverage=1 00:04:58.113 --rc genhtml_legend=1 00:04:58.113 --rc geninfo_all_blocks=1 00:04:58.113 --rc geninfo_unexecuted_blocks=1 00:04:58.113 00:04:58.113 ' 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.113 --rc genhtml_branch_coverage=1 00:04:58.113 --rc genhtml_function_coverage=1 00:04:58.113 --rc genhtml_legend=1 00:04:58.113 --rc geninfo_all_blocks=1 00:04:58.113 --rc geninfo_unexecuted_blocks=1 00:04:58.113 00:04:58.113 ' 00:04:58.113 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.113 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4065256 00:04:58.113 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4065256 00:04:58.113 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4065256 ']' 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.113 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.113 [2024-12-10 03:51:57.217335] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:04:58.113 [2024-12-10 03:51:57.217385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065256 ] 00:04:58.113 [2024-12-10 03:51:57.292169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.113 [2024-12-10 03:51:57.332564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.373 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.373 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:58.373 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.373 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.373 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.373 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.373 { 00:04:58.373 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.373 } 00:04:58.373 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.373 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.373 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:58.373 1 heaps totaling size 818.000000 MiB 00:04:58.373 size: 818.000000 MiB heap id: 0 00:04:58.373 end heaps---------- 00:04:58.373 9 mempools totaling size 603.782043 MiB 00:04:58.373 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.373 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.373 size: 100.555481 MiB name: bdev_io_4065256 00:04:58.373 size: 50.003479 MiB name: msgpool_4065256 00:04:58.373 size: 36.509338 MiB name: fsdev_io_4065256 00:04:58.373 size: 21.763794 MiB name: PDU_Pool 00:04:58.373 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.373 size: 4.133484 MiB name: evtpool_4065256 00:04:58.373 size: 0.026123 MiB name: Session_Pool 00:04:58.373 end mempools------- 00:04:58.373 6 memzones totaling size 4.142822 MiB 00:04:58.373 size: 1.000366 MiB name: RG_ring_0_4065256 00:04:58.373 size: 1.000366 MiB name: RG_ring_1_4065256 00:04:58.373 size: 1.000366 MiB name: RG_ring_4_4065256 00:04:58.373 size: 1.000366 MiB name: RG_ring_5_4065256 00:04:58.373 size: 0.125366 MiB name: RG_ring_2_4065256 00:04:58.373 size: 0.015991 MiB name: RG_ring_3_4065256 00:04:58.373 end memzones------- 00:04:58.373 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.373 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:58.373 list of free elements. 
size: 10.852478 MiB 00:04:58.373 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:58.373 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:58.373 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:58.373 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:58.373 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:58.373 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:58.373 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:58.373 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:58.373 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:58.373 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:58.373 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:58.373 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:58.373 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:58.373 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:58.373 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:58.373 list of standard malloc elements. size: 199.218628 MiB 00:04:58.373 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:58.373 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:58.373 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:58.373 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:58.373 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:58.373 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.373 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:58.373 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.373 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:58.373 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:58.373 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:58.373 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:58.373 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:58.373 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:58.373 list of memzone associated elements. size: 607.928894 MiB 00:04:58.373 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:58.373 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.373 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:58.373 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.373 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:58.373 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4065256_0 00:04:58.373 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:58.373 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4065256_0 00:04:58.373 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:58.373 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4065256_0 00:04:58.373 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:58.373 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.373 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:58.373 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.373 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:58.373 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4065256_0 00:04:58.373 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:58.373 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4065256 00:04:58.373 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.373 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4065256 00:04:58.373 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:58.373 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.373 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:58.373 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.373 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:58.373 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.373 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:58.373 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.373 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:58.373 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4065256 00:04:58.373 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:58.373 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4065256 00:04:58.373 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:58.373 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4065256 00:04:58.373 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:58.373 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4065256 00:04:58.373 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:58.373 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4065256 00:04:58.373 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:58.373 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4065256 00:04:58.373 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:58.373 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.373 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:58.373 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.373 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:58.373 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.373 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:58.373 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4065256 00:04:58.373 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:58.374 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4065256 00:04:58.374 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:58.374 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.374 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:58.374 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.374 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:58.374 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4065256 00:04:58.374 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:58.374 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.374 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:58.374 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4065256 00:04:58.374 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:58.374 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4065256 00:04:58.374 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:58.374 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4065256 00:04:58.374 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:58.374 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.374 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.374 03:51:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4065256 00:04:58.374 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4065256 ']' 00:04:58.374 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4065256 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065256 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065256' 00:04:58.633 killing process with pid 4065256 00:04:58.633 03:51:57 
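The heap, mempool and memzone maps above come from a two-step flow: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK allocator state (the response names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then parses that dump, with -m 0 expanding heap 0 element by element. A rough by-hand equivalent, assuming the target listens on the default /var/tmp/spdk.sock and the script picks up the dump file named in the RPC response, as it evidently does here:

  $ ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
  $ ./scripts/dpdk_mem_info.py                   # heap/mempool/memzone summary
  $ ./scripts/dpdk_mem_info.py -m 0              # per-element detail for heap id 0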
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4065256 00:04:58.633 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4065256 00:04:58.891 00:04:58.891 real 0m0.995s 00:04:58.891 user 0m0.916s 00:04:58.891 sys 0m0.407s 00:04:58.891 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.891 03:51:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.891 ************************************ 00:04:58.891 END TEST dpdk_mem_utility 00:04:58.891 ************************************ 00:04:58.891 03:51:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.891 03:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.891 03:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.891 03:51:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.891 ************************************ 00:04:58.891 START TEST event 00:04:58.891 ************************************ 00:04:58.891 03:51:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.891 * Looking for test storage... 00:04:58.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:58.891 03:51:58 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.891 03:51:58 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.891 03:51:58 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.150 03:51:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.150 03:51:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.150 03:51:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.150 03:51:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.150 03:51:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.150 03:51:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.150 03:51:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.150 03:51:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.150 03:51:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.150 03:51:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.150 03:51:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.150 03:51:58 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.150 03:51:58 event -- scripts/common.sh@345 -- # : 1 00:04:59.150 03:51:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.150 03:51:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.150 03:51:58 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.150 03:51:58 event -- scripts/common.sh@353 -- # local d=1 00:04:59.150 03:51:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.150 03:51:58 event -- scripts/common.sh@355 -- # echo 1 00:04:59.150 03:51:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.150 03:51:58 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.150 03:51:58 event -- scripts/common.sh@353 -- # local d=2 00:04:59.150 03:51:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.150 03:51:58 event -- scripts/common.sh@355 -- # echo 2 00:04:59.150 03:51:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.150 03:51:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.150 03:51:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.150 03:51:58 event -- scripts/common.sh@368 -- # return 0 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.150 --rc genhtml_branch_coverage=1 00:04:59.150 --rc genhtml_function_coverage=1 00:04:59.150 --rc genhtml_legend=1 00:04:59.150 --rc geninfo_all_blocks=1 00:04:59.150 --rc geninfo_unexecuted_blocks=1 00:04:59.150 00:04:59.150 ' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.150 --rc genhtml_branch_coverage=1 00:04:59.150 --rc genhtml_function_coverage=1 00:04:59.150 --rc genhtml_legend=1 00:04:59.150 --rc geninfo_all_blocks=1 00:04:59.150 --rc geninfo_unexecuted_blocks=1 00:04:59.150 00:04:59.150 ' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.150 --rc genhtml_branch_coverage=1 00:04:59.150 --rc genhtml_function_coverage=1 00:04:59.150 --rc genhtml_legend=1 00:04:59.150 --rc geninfo_all_blocks=1 00:04:59.150 --rc geninfo_unexecuted_blocks=1 00:04:59.150 00:04:59.150 ' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.150 --rc genhtml_branch_coverage=1 00:04:59.150 --rc genhtml_function_coverage=1 00:04:59.150 --rc genhtml_legend=1 00:04:59.150 --rc geninfo_all_blocks=1 00:04:59.150 --rc geninfo_unexecuted_blocks=1 00:04:59.150 00:04:59.150 ' 00:04:59.150 03:51:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:59.150 03:51:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.150 03:51:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:59.150 03:51:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.150 03:51:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.150 ************************************ 00:04:59.150 START TEST event_perf 00:04:59.150 ************************************ 00:04:59.150 03:51:58 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:59.150 Running I/O for 1 seconds...[2024-12-10 03:51:58.294947] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:04:59.150 [2024-12-10 03:51:58.295016] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065385 ] 00:04:59.150 [2024-12-10 03:51:58.375785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.151 [2024-12-10 03:51:58.417647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.151 [2024-12-10 03:51:58.417759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.151 [2024-12-10 03:51:58.417864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.151 [2024-12-10 03:51:58.417865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.526 Running I/O for 1 seconds... 00:05:00.526 lcore 0: 201388 00:05:00.526 lcore 1: 201390 00:05:00.526 lcore 2: 201388 00:05:00.526 lcore 3: 201388 00:05:00.526 done. 00:05:00.526 00:05:00.526 real 0m1.184s 00:05:00.526 user 0m4.101s 00:05:00.526 sys 0m0.081s 00:05:00.526 03:51:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.526 03:51:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.526 ************************************ 00:05:00.526 END TEST event_perf 00:05:00.526 ************************************ 00:05:00.526 03:51:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.526 03:51:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.526 03:51:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.526 03:51:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.526 ************************************ 00:05:00.526 START TEST event_reactor 00:05:00.526 ************************************ 00:05:00.526 03:51:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.526 [2024-12-10 03:51:59.548042] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
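Summing the per-lcore counters from the event_perf run above gives the aggregate for its 1-second window: 201388 + 201390 + 201388 + 201388 = 805,554 events, roughly 0.8 million events per second, spread almost perfectly evenly across the four cores of the 0xF mask.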
00:05:00.526 [2024-12-10 03:51:59.548113] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065586 ] 00:05:00.526 [2024-12-10 03:51:59.627297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.526 [2024-12-10 03:51:59.666090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.608 test_start 00:05:01.608 oneshot 00:05:01.608 tick 100 00:05:01.608 tick 100 00:05:01.608 tick 250 00:05:01.608 tick 100 00:05:01.608 tick 100 00:05:01.608 tick 250 00:05:01.608 tick 100 00:05:01.608 tick 500 00:05:01.608 tick 100 00:05:01.608 tick 100 00:05:01.608 tick 250 00:05:01.608 tick 100 00:05:01.608 tick 100 00:05:01.608 test_end 00:05:01.608 00:05:01.608 real 0m1.180s 00:05:01.608 user 0m1.094s 00:05:01.608 sys 0m0.081s 00:05:01.608 03:52:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.608 03:52:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.608 ************************************ 00:05:01.608 END TEST event_reactor 00:05:01.608 ************************************ 00:05:01.608 03:52:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.608 03:52:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:01.608 03:52:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.608 03:52:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.608 ************************************ 00:05:01.608 START TEST event_reactor_perf 00:05:01.608 ************************************ 00:05:01.608 03:52:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.608 [2024-12-10 03:52:00.792006] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
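The test_start/tick/test_end block above is the standalone reactor test; the recurring tick 100 / tick 250 / tick 500 markers are consistent with timed pollers firing at three different periods, with oneshot printed by a run-once poller. It can be rerun directly for the same 1-second window:

  $ ./test/event/reactor/reactor -t 1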
00:05:01.608 [2024-12-10 03:52:00.792065] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065829 ] 00:05:01.867 [2024-12-10 03:52:00.871332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.867 [2024-12-10 03:52:00.911100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.802 test_start 00:05:02.802 test_end 00:05:02.802 Performance: 520872 events per second 00:05:02.802 00:05:02.802 real 0m1.172s 00:05:02.802 user 0m1.101s 00:05:02.802 sys 0m0.068s 00:05:02.802 03:52:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.802 03:52:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.802 ************************************ 00:05:02.802 END TEST event_reactor_perf 00:05:02.802 ************************************ 00:05:02.802 03:52:01 event -- event/event.sh@49 -- # uname -s 00:05:02.802 03:52:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.802 03:52:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.802 03:52:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.802 03:52:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.802 03:52:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.802 ************************************ 00:05:02.802 START TEST event_scheduler 00:05:02.802 ************************************ 00:05:02.802 03:52:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.061 * Looking for test storage... 
00:05:03.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.061 03:52:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.061 --rc genhtml_branch_coverage=1 00:05:03.061 --rc genhtml_function_coverage=1 00:05:03.061 --rc genhtml_legend=1 00:05:03.061 --rc geninfo_all_blocks=1 00:05:03.061 --rc geninfo_unexecuted_blocks=1 00:05:03.061 00:05:03.061 ' 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.061 --rc genhtml_branch_coverage=1 00:05:03.061 --rc genhtml_function_coverage=1 00:05:03.061 --rc genhtml_legend=1 00:05:03.061 --rc geninfo_all_blocks=1 00:05:03.061 --rc geninfo_unexecuted_blocks=1 00:05:03.061 00:05:03.061 ' 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.061 --rc genhtml_branch_coverage=1 00:05:03.061 --rc genhtml_function_coverage=1 00:05:03.061 --rc genhtml_legend=1 00:05:03.061 --rc geninfo_all_blocks=1 00:05:03.061 --rc geninfo_unexecuted_blocks=1 00:05:03.061 00:05:03.061 ' 00:05:03.061 03:52:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.061 --rc genhtml_branch_coverage=1 00:05:03.061 --rc genhtml_function_coverage=1 00:05:03.061 --rc genhtml_legend=1 00:05:03.062 --rc geninfo_all_blocks=1 00:05:03.062 --rc geninfo_unexecuted_blocks=1 00:05:03.062 00:05:03.062 ' 00:05:03.062 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.062 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4066114 00:05:03.062 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.062 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.062 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
4066114 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4066114 ']' 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.062 03:52:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.062 [2024-12-10 03:52:02.242316] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:03.062 [2024-12-10 03:52:02.242365] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066114 ] 00:05:03.062 [2024-12-10 03:52:02.318237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.321 [2024-12-10 03:52:02.361337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.321 [2024-12-10 03:52:02.361460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.321 [2024-12-10 03:52:02.361569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.321 [2024-12-10 03:52:02.361570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.321 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 [2024-12-10 03:52:02.422107] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:03.321 [2024-12-10 03:52:02.422124] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.321 [2024-12-10 03:52:02.422133] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.321 [2024-12-10 03:52:02.422138] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.321 [2024-12-10 03:52:02.422143] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 [2024-12-10 03:52:02.501164] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
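Before the subtests run, the harness switches the app from the default scheduler to the dynamic one and only then completes framework init. The dpdk_governor *ERROR* above is non-fatal: the governor declines a core mask that covers only part of an SMT sibling set, and the dynamic scheduler proceeds without it, applying the load/core/busy limits logged right after. The same sequence works against any target started with --wait-for-rpc:

  $ ./scripts/rpc.py framework_set_scheduler dynamic
  $ ./scripts/rpc.py framework_start_init
  $ ./scripts/rpc.py framework_get_scheduler     # should now report the dynamic scheduler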
00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 ************************************ 00:05:03.321 START TEST scheduler_create_thread 00:05:03.321 ************************************ 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 2 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 3 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 4 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 5 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 6 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 7 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.321 8 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.321 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.579 9 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.579 10 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.579 03:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.954 03:52:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.954 03:52:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.954 03:52:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.954 03:52:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.954 03:52:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.889 03:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.889 00:05:05.889 real 0m2.618s 00:05:05.889 user 0m0.024s 00:05:05.889 sys 0m0.004s 00:05:05.889 03:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.890 03:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.890 ************************************ 00:05:05.890 END TEST scheduler_create_thread 00:05:05.890 ************************************ 00:05:06.147 03:52:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:06.147 03:52:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4066114 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4066114 ']' 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 4066114 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066114 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066114' 00:05:06.147 killing process with pid 4066114 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4066114 00:05:06.147 03:52:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4066114 00:05:06.405 [2024-12-10 03:52:05.635134] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
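The thread lifecycle exercised above goes through an RPC plugin bundled with the scheduler test rather than stock rpc.py, hence the --plugin scheduler_plugin on every call. With that plugin importable, the calls mirror the trace:

  $ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # thread id, active %
  $ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

Here -n names the thread, -m pins it to a core mask, and -a sets its requested active percentage; ids 11 and 12 are simply the ones this run assigned.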
00:05:06.665 00:05:06.665 real 0m3.782s 00:05:06.665 user 0m5.686s 00:05:06.665 sys 0m0.391s 00:05:06.665 03:52:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.665 03:52:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.665 ************************************ 00:05:06.665 END TEST event_scheduler 00:05:06.665 ************************************ 00:05:06.665 03:52:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.665 03:52:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.665 03:52:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.665 03:52:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.665 03:52:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.665 ************************************ 00:05:06.665 START TEST app_repeat 00:05:06.665 ************************************ 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4066826 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4066826' 00:05:06.665 Process app_repeat pid: 4066826 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.665 spdk_app_start Round 0 00:05:06.665 03:52:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4066826 /var/tmp/spdk-nbd.sock 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4066826 ']' 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.665 03:52:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.665 [2024-12-10 03:52:05.914434] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:06.665 [2024-12-10 03:52:05.914486] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066826 ] 00:05:06.924 [2024-12-10 03:52:05.990730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.924 [2024-12-10 03:52:06.033646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.924 [2024-12-10 03:52:06.033647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.924 03:52:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.924 03:52:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.924 03:52:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.182 Malloc0 00:05:07.182 03:52:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.441 Malloc1 00:05:07.441 03:52:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.441 03:52:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.700 /dev/nbd0 00:05:07.700 03:52:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.700 03:52:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.700 1+0 records in 00:05:07.700 1+0 records out 00:05:07.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187607 s, 21.8 MB/s 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.700 03:52:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.700 03:52:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.700 03:52:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.700 03:52:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.958 /dev/nbd1 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.959 1+0 records in 00:05:07.959 1+0 records out 00:05:07.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196303 s, 20.9 MB/s 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.959 03:52:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.959 
03:52:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.959 03:52:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.217 { 00:05:08.217 "nbd_device": "/dev/nbd0", 00:05:08.217 "bdev_name": "Malloc0" 00:05:08.217 }, 00:05:08.217 { 00:05:08.217 "nbd_device": "/dev/nbd1", 00:05:08.217 "bdev_name": "Malloc1" 00:05:08.217 } 00:05:08.217 ]' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.217 { 00:05:08.217 "nbd_device": "/dev/nbd0", 00:05:08.217 "bdev_name": "Malloc0" 00:05:08.217 }, 00:05:08.217 { 00:05:08.217 "nbd_device": "/dev/nbd1", 00:05:08.217 "bdev_name": "Malloc1" 00:05:08.217 } 00:05:08.217 ]' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.217 /dev/nbd1' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.217 /dev/nbd1' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.217 256+0 records in 00:05:08.217 256+0 records out 00:05:08.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106512 s, 98.4 MB/s 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.217 256+0 records in 00:05:08.217 256+0 records out 00:05:08.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136961 s, 76.6 MB/s 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.217 256+0 records in 00:05:08.217 256+0 records out 00:05:08.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014724 s, 71.2 MB/s 00:05:08.217 03:52:07 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.217 03:52:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.218 03:52:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.476 03:52:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.852 03:52:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.853 03:52:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.853 03:52:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.853 03:52:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.111 03:52:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.111 [2024-12-10 03:52:08.373511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.370 [2024-12-10 03:52:08.410503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.370 [2024-12-10 03:52:08.410503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.370 [2024-12-10 03:52:08.450824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.370 [2024-12-10 03:52:08.450863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.656 03:52:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.657 03:52:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.657 spdk_app_start Round 1 00:05:12.657 03:52:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4066826 /var/tmp/spdk-nbd.sock 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4066826 ']' 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
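
Round 0 above is one complete pass of the nbd data-verify cycle that app_repeat runs once per round. Reduced to its commands, with the workspace temp-file path shortened to /tmp/nbdrandtest for readability and error handling omitted, a single round looks roughly like this sketch; every command, device and size is taken from the xtrace:

  # One app_repeat round, assuming the app is already listening on the socket.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096            # 64 MB bdev, 4 KiB blocks -> Malloc0
  $RPC bdev_malloc_create 64 4096            # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as a kernel block device
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$dev"   # read back through nbd and verify
  done
  rm /tmp/nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM            # ends the round; the harness restarts the app
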
00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.657 03:52:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.657 03:52:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.657 Malloc0 00:05:12.657 03:52:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.657 Malloc1 00:05:12.657 03:52:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.657 03:52:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.915 /dev/nbd0 00:05:12.915 03:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.915 03:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:12.915 1+0 records in 00:05:12.915 1+0 records out 00:05:12.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024241 s, 16.9 MB/s 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.915 03:52:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.915 03:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.915 03:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.915 03:52:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.174 /dev/nbd1 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.174 1+0 records in 00:05:13.174 1+0 records out 00:05:13.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196137 s, 20.9 MB/s 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.174 03:52:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.174 03:52:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:13.433 { 00:05:13.433 "nbd_device": "/dev/nbd0", 00:05:13.433 "bdev_name": "Malloc0" 00:05:13.433 }, 00:05:13.433 { 00:05:13.433 "nbd_device": "/dev/nbd1", 00:05:13.433 "bdev_name": "Malloc1" 00:05:13.433 } 00:05:13.433 ]' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.433 { 00:05:13.433 "nbd_device": "/dev/nbd0", 00:05:13.433 "bdev_name": "Malloc0" 00:05:13.433 }, 00:05:13.433 { 00:05:13.433 "nbd_device": "/dev/nbd1", 00:05:13.433 "bdev_name": "Malloc1" 00:05:13.433 } 00:05:13.433 ]' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.433 /dev/nbd1' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.433 /dev/nbd1' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.433 256+0 records in 00:05:13.433 256+0 records out 00:05:13.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106133 s, 98.8 MB/s 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.433 256+0 records in 00:05:13.433 256+0 records out 00:05:13.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143279 s, 73.2 MB/s 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.433 256+0 records in 00:05:13.433 256+0 records out 00:05:13.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156257 s, 67.1 MB/s 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.433 03:52:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.692 03:52:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.951 03:52:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.209 03:52:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.209 03:52:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.468 03:52:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.468 [2024-12-10 03:52:13.724751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.727 [2024-12-10 03:52:13.761944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.727 [2024-12-10 03:52:13.761945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.727 [2024-12-10 03:52:13.803087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.727 [2024-12-10 03:52:13.803124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.014 03:52:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.014 03:52:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:18.014 spdk_app_start Round 2 00:05:18.014 03:52:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4066826 /var/tmp/spdk-nbd.sock 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4066826 ']' 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
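
The count=2 after setup and count=0 after teardown in each round come from counting the devices that nbd_get_disks reports. A sketch of that helper, matching the jq and grep pipeline visible in the xtrace; the true fallback also appears in the trace, because grep -c exits non-zero when nothing matches:

  nbd_get_count() {
      local rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
      local disks names count
      disks=$($rpc nbd_get_disks)                          # JSON array of {nbd_device, bdev_name}
      names=$(echo "$disks" | jq -r '.[] | .nbd_device')   # "/dev/nbd0\n/dev/nbd1", or empty after teardown
      count=$(echo "$names" | grep -c /dev/nbd || true)    # grep -c still prints 0 on no match
      echo "$count"
  }
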
00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.014 03:52:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.014 03:52:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.014 Malloc0 00:05:18.014 03:52:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.014 Malloc1 00:05:18.014 03:52:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.014 03:52:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.273 /dev/nbd0 00:05:18.273 03:52:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.273 03:52:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:18.273 1+0 records in 00:05:18.273 1+0 records out 00:05:18.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238959 s, 17.1 MB/s 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.273 03:52:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.273 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.273 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.273 03:52:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.532 /dev/nbd1 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.532 1+0 records in 00:05:18.532 1+0 records out 00:05:18.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227379 s, 18.0 MB/s 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.532 03:52:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.532 03:52:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:18.791 { 00:05:18.791 "nbd_device": "/dev/nbd0", 00:05:18.791 "bdev_name": "Malloc0" 00:05:18.791 }, 00:05:18.791 { 00:05:18.791 "nbd_device": "/dev/nbd1", 00:05:18.791 "bdev_name": "Malloc1" 00:05:18.791 } 00:05:18.791 ]' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.791 { 00:05:18.791 "nbd_device": "/dev/nbd0", 00:05:18.791 "bdev_name": "Malloc0" 00:05:18.791 }, 00:05:18.791 { 00:05:18.791 "nbd_device": "/dev/nbd1", 00:05:18.791 "bdev_name": "Malloc1" 00:05:18.791 } 00:05:18.791 ]' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.791 /dev/nbd1' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.791 /dev/nbd1' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.791 256+0 records in 00:05:18.791 256+0 records out 00:05:18.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106749 s, 98.2 MB/s 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.791 256+0 records in 00:05:18.791 256+0 records out 00:05:18.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015013 s, 69.8 MB/s 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.791 256+0 records in 00:05:18.791 256+0 records out 00:05:18.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148722 s, 70.5 MB/s 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.791 03:52:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.049 03:52:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.308 03:52:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.566 03:52:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.566 03:52:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.825 03:52:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.825 [2024-12-10 03:52:19.035896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.825 [2024-12-10 03:52:19.071703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.825 [2024-12-10 03:52:19.071704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.084 [2024-12-10 03:52:19.112440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.084 [2024-12-10 03:52:19.112471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.371 03:52:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4066826 /var/tmp/spdk-nbd.sock 00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4066826 ']' 00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
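
The waitfornbd_exit fragments that recur through the teardowns above (local nbd_name, the (( i <= 20 )) guard, grep on /proc/partitions, break) are a bounded poll for the kernel detaching the device. A plausible reconstruction of its shape; the retry cap of 20 matches the guards in the trace, while the sleep interval is an assumption, since in this run the device is already gone on the first probe:

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          # still listed in /proc/partitions -> still attached, wait and retry
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1   # interval assumed; never reached in this trace
      done
      return 0
  }
  # usage, as after the nbd_stop_disk /dev/nbd1 call above:
  waitfornbd_exit nbd1
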
00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.371 03:52:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.371 03:52:22 event.app_repeat -- event/event.sh@39 -- # killprocess 4066826 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4066826 ']' 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4066826 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066826 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066826' 00:05:23.371 killing process with pid 4066826 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4066826 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4066826 00:05:23.371 spdk_app_start is called in Round 0. 00:05:23.371 Shutdown signal received, stop current app iteration 00:05:23.371 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:05:23.371 spdk_app_start is called in Round 1. 00:05:23.371 Shutdown signal received, stop current app iteration 00:05:23.371 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:05:23.371 spdk_app_start is called in Round 2. 00:05:23.371 Shutdown signal received, stop current app iteration 00:05:23.371 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:05:23.371 spdk_app_start is called in Round 3. 
00:05:23.371 Shutdown signal received, stop current app iteration 00:05:23.371 03:52:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:23.371 03:52:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:23.371 00:05:23.371 real 0m16.408s 00:05:23.371 user 0m36.099s 00:05:23.371 sys 0m2.509s 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.371 03:52:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.371 ************************************ 00:05:23.371 END TEST app_repeat 00:05:23.371 ************************************ 00:05:23.371 03:52:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.371 03:52:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:23.371 03:52:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.371 03:52:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.371 03:52:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.371 ************************************ 00:05:23.371 START TEST cpu_locks 00:05:23.371 ************************************ 00:05:23.371 03:52:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:23.371 * Looking for test storage... 00:05:23.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.371 03:52:22 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.371 03:52:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.371 03:52:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.371 03:52:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.371 03:52:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.372 03:52:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.372 --rc genhtml_branch_coverage=1 00:05:23.372 --rc genhtml_function_coverage=1 00:05:23.372 --rc genhtml_legend=1 00:05:23.372 --rc geninfo_all_blocks=1 00:05:23.372 --rc geninfo_unexecuted_blocks=1 00:05:23.372 00:05:23.372 ' 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.372 --rc genhtml_branch_coverage=1 00:05:23.372 --rc genhtml_function_coverage=1 00:05:23.372 --rc genhtml_legend=1 00:05:23.372 --rc geninfo_all_blocks=1 00:05:23.372 --rc geninfo_unexecuted_blocks=1 00:05:23.372 00:05:23.372 ' 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.372 --rc genhtml_branch_coverage=1 00:05:23.372 --rc genhtml_function_coverage=1 00:05:23.372 --rc genhtml_legend=1 00:05:23.372 --rc geninfo_all_blocks=1 00:05:23.372 --rc geninfo_unexecuted_blocks=1 00:05:23.372 00:05:23.372 ' 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.372 --rc genhtml_branch_coverage=1 00:05:23.372 --rc genhtml_function_coverage=1 00:05:23.372 --rc genhtml_legend=1 00:05:23.372 --rc geninfo_all_blocks=1 00:05:23.372 --rc geninfo_unexecuted_blocks=1 00:05:23.372 00:05:23.372 ' 00:05:23.372 03:52:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.372 03:52:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.372 03:52:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.372 03:52:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.372 03:52:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.372 ************************************ 
00:05:23.372 START TEST default_locks 00:05:23.372 ************************************ 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4069755 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4069755 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4069755 ']' 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.372 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.372 [2024-12-10 03:52:22.617706] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:23.372 [2024-12-10 03:52:22.617750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069755 ] 00:05:23.631 [2024-12-10 03:52:22.694147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.631 [2024-12-10 03:52:22.736555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.890 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.890 03:52:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:23.890 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4069755 00:05:23.890 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4069755 00:05:23.890 03:52:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.457 lslocks: write error 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4069755 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4069755 ']' 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4069755 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4069755 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 4069755' 00:05:24.457 killing process with pid 4069755 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4069755 00:05:24.457 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4069755 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4069755 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4069755 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4069755 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4069755 ']' 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.715 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4069755) - No such process 00:05:24.716 ERROR: process (pid: 4069755) is no longer running 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.716 00:05:24.716 real 0m1.258s 00:05:24.716 user 0m1.214s 00:05:24.716 sys 0m0.562s 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.716 03:52:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.716 ************************************ 00:05:24.716 END TEST default_locks 00:05:24.716 ************************************ 00:05:24.716 03:52:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.716 03:52:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.716 03:52:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.716 03:52:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.716 ************************************ 00:05:24.716 START TEST default_locks_via_rpc 00:05:24.716 ************************************ 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4070027 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4070027 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4070027 ']' 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
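The default_locks flow that just finished reduces to: start spdk_tgt on core 0, confirm its per-core lock is visible, kill it, and confirm that waiting on the dead pid fails. The lock probe is the lslocks -p 4069755 | grep -q spdk_cpu_lock pair above; the stray "lslocks: write error" is benign, since grep -q exits on the first match and closes the pipe while lslocks is still writing. A sketch of that probe as a helper:

    # sketch of the locks_exist check from cpu_locks.sh@22
    locks_exist() {
        # true iff the pid holds a lock on a file whose path contains spdk_cpu_lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 4069755 && echo "core lock held" || echo "no core lock"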
00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.716 03:52:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.716 [2024-12-10 03:52:23.943149] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:24.716 [2024-12-10 03:52:23.943199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070027 ] 00:05:24.975 [2024-12-10 03:52:24.017305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.975 [2024-12-10 03:52:24.055285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4070027 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4070027 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4070027 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4070027 ']' 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4070027 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.234 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070027 00:05:25.493 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.493 
03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.493 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070027' 00:05:25.493 killing process with pid 4070027 00:05:25.493 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4070027 00:05:25.493 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4070027 00:05:25.752 00:05:25.752 real 0m0.948s 00:05:25.752 user 0m0.893s 00:05:25.752 sys 0m0.444s 00:05:25.752 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.752 03:52:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.752 ************************************ 00:05:25.752 END TEST default_locks_via_rpc 00:05:25.752 ************************************ 00:05:25.752 03:52:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.752 03:52:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.752 03:52:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.752 03:52:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.752 ************************************ 00:05:25.752 START TEST non_locking_app_on_locked_coremask 00:05:25.752 ************************************ 00:05:25.752 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:25.752 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4070277 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4070277 /var/tmp/spdk.sock 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4070277 ']' 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.753 03:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.753 [2024-12-10 03:52:24.957832] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
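default_locks_via_rpc, completed above, toggles the same per-core locks at runtime rather than at startup: framework_disable_cpumask_locks releases them and framework_enable_cpumask_locks re-acquires them, with lslocks confirming each state. A sketch against the running target from the trace (pid 4070027, default RPC socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=4070027
    "$rpc" -s "$sock" framework_disable_cpumask_locks    # locks dropped, app keeps running
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"
    "$rpc" -s "$sock" framework_enable_cpumask_locks     # locks re-acquired
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held again"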
00:05:25.753 [2024-12-10 03:52:24.957870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070277 ] 00:05:25.753 [2024-12-10 03:52:25.029266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.011 [2024-12-10 03:52:25.067955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4070281 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4070281 /var/tmp/spdk2.sock 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4070281 ']' 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.011 03:52:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.269 [2024-12-10 03:52:25.342774] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:26.269 [2024-12-10 03:52:25.342819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070281 ] 00:05:26.269 [2024-12-10 03:52:25.427950] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
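non_locking_app_on_locked_coremask runs two targets on the same core: the first (4070277) claims the core-0 lock, while the second (4070281) is started with --disable-cpumask-locks and its own RPC socket, hence the "CPU core locks deactivated" notice just above; per the later check_remaining_locks naming, the contested file would be /var/tmp/spdk_cpu_lock_000. A sketch of the two launches:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                 # first instance: claims the core-0 lock file
    pid1=$!
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                         # second instance: same core, but takes no lock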
00:05:26.269 [2024-12-10 03:52:25.427972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.269 [2024-12-10 03:52:25.507275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4070277 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4070277 00:05:27.204 lslocks: write error 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4070277 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4070277 ']' 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4070277 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.204 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070277 00:05:27.464 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.464 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.464 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070277' 00:05:27.464 killing process with pid 4070277 00:05:27.464 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4070277 00:05:27.464 03:52:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4070277 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4070281 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4070281 ']' 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4070281 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070281 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070281' 00:05:28.032 
killing process with pid 4070281 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4070281 00:05:28.032 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4070281 00:05:28.291 00:05:28.291 real 0m2.523s 00:05:28.291 user 0m2.683s 00:05:28.291 sys 0m0.802s 00:05:28.291 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.291 03:52:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 END TEST non_locking_app_on_locked_coremask 00:05:28.291 ************************************ 00:05:28.291 03:52:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:28.291 03:52:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.291 03:52:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.291 03:52:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 ************************************ 00:05:28.291 START TEST locking_app_on_unlocked_coremask 00:05:28.291 ************************************ 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4070756 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4070756 /var/tmp/spdk.sock 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4070756 ']' 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.291 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 [2024-12-10 03:52:27.547175] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:28.291 [2024-12-10 03:52:27.547218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070756 ] 00:05:28.550 [2024-12-10 03:52:27.617721] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
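locking_app_on_unlocked_coremask, starting above, inverts that setup: the first instance (4070756) runs with --disable-cpumask-locks, leaving the core-0 lock free, so the second, lock-taking instance (4070765) can claim it, which locks_exist 4070765 confirms later in the trace. A quick verification sketch once both are up:

    # only the second (locking) instance should show the core lock
    lslocks -p 4070756 | grep -c spdk_cpu_lock || true   # expect 0: launched lock-free
    lslocks -p 4070765 | grep -c spdk_cpu_lock           # expect 1: claimed core 0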
00:05:28.550 [2024-12-10 03:52:27.617750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.550 [2024-12-10 03:52:27.653194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4070765 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4070765 /var/tmp/spdk2.sock 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4070765 ']' 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.809 03:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.809 [2024-12-10 03:52:27.925135] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:28.809 [2024-12-10 03:52:27.925191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070765 ] 00:05:28.809 [2024-12-10 03:52:28.013502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.809 [2024-12-10 03:52:28.088977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.746 03:52:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.746 03:52:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.746 03:52:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4070765 00:05:29.746 03:52:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4070765 00:05:29.746 03:52:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.313 lslocks: write error 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4070756 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4070756 ']' 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4070756 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070756 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070756' 00:05:30.313 killing process with pid 4070756 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4070756 00:05:30.313 03:52:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4070756 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4070765 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4070765 ']' 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4070765 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070765 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.886 03:52:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070765' 00:05:30.886 killing process with pid 4070765 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4070765 00:05:30.886 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4070765 00:05:31.145 00:05:31.145 real 0m2.899s 00:05:31.145 user 0m3.037s 00:05:31.145 sys 0m0.968s 00:05:31.145 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.145 03:52:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.145 ************************************ 00:05:31.145 END TEST locking_app_on_unlocked_coremask 00:05:31.145 ************************************ 00:05:31.145 03:52:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:31.145 03:52:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.404 03:52:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.404 03:52:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.404 ************************************ 00:05:31.404 START TEST locking_app_on_locked_coremask 00:05:31.404 ************************************ 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4071248 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4071248 /var/tmp/spdk.sock 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4071248 ']' 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.404 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.404 [2024-12-10 03:52:30.516701] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
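locking_app_on_locked_coremask, whose first instance (4071248) is initializing above, expects the second launch on the same core to fail, so the harness wraps waitforlisten in the NOT helper seen below, inverting the exit status via the es=1 / (( !es == 0 )) sequence in the trace. A simplified sketch of that inversion:

    # simplified sketch of the NOT helper from common/autotest_common.sh
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # death by signal is still a real failure
        (( !es == 0 ))                   # succeed only if the command failed
    }
    NOT waitforlisten 4071335 /var/tmp/spdk2.sock   # passes: the 2nd target never comes up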
00:05:31.404 [2024-12-10 03:52:30.516739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071248 ] 00:05:31.404 [2024-12-10 03:52:30.588624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.404 [2024-12-10 03:52:30.628144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4071335 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4071335 /var/tmp/spdk2.sock 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4071335 /var/tmp/spdk2.sock 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4071335 /var/tmp/spdk2.sock 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4071335 ']' 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.663 03:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.663 [2024-12-10 03:52:30.913329] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:31.663 [2024-12-10 03:52:30.913377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071335 ] 00:05:31.922 [2024-12-10 03:52:31.001685] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4071248 has claimed it. 00:05:31.922 [2024-12-10 03:52:31.001721] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4071335) - No such process 00:05:32.489 ERROR: process (pid: 4071335) is no longer running 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4071248 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4071248 00:05:32.489 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.748 lslocks: write error 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4071248 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4071248 ']' 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4071248 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4071248 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4071248' 00:05:32.748 killing process with pid 4071248 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4071248 00:05:32.748 03:52:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4071248 00:05:33.007 00:05:33.007 real 0m1.702s 00:05:33.007 user 0m1.837s 00:05:33.007 sys 0m0.538s 00:05:33.007 03:52:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:33.007 03:52:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.007 ************************************ 00:05:33.007 END TEST locking_app_on_locked_coremask 00:05:33.007 ************************************ 00:05:33.007 03:52:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:33.007 03:52:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.007 03:52:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.007 03:52:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.007 ************************************ 00:05:33.007 START TEST locking_overlapped_coremask 00:05:33.007 ************************************ 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4071601 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4071601 /var/tmp/spdk.sock 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4071601 ']' 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.007 03:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.007 [2024-12-10 03:52:32.282968] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:33.007 [2024-12-10 03:52:32.283011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071601 ] 00:05:33.266 [2024-12-10 03:52:32.359881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.266 [2024-12-10 03:52:32.405120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.266 [2024-12-10 03:52:32.405146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.266 [2024-12-10 03:52:32.405146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.834 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.834 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.834 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:33.834 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4071738 00:05:33.834 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4071738 /var/tmp/spdk2.sock 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4071738 /var/tmp/spdk2.sock 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4071738 /var/tmp/spdk2.sock 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4071738 ']' 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.093 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.093 [2024-12-10 03:52:33.168393] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:34.093 [2024-12-10 03:52:33.168441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071738 ] 00:05:34.093 [2024-12-10 03:52:33.258250] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4071601 has claimed it. 00:05:34.093 [2024-12-10 03:52:33.258289] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4071738) - No such process 00:05:34.661 ERROR: process (pid: 4071738) is no longer running 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4071601 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4071601 ']' 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4071601 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4071601 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4071601' 00:05:34.661 killing process with pid 4071601 00:05:34.661 03:52:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4071601 00:05:34.661 03:52:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4071601 00:05:34.920 00:05:34.920 real 0m1.929s 00:05:34.920 user 0m5.531s 00:05:34.920 sys 0m0.440s 00:05:34.920 03:52:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.920 03:52:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.920 ************************************ 00:05:34.920 END TEST locking_overlapped_coremask 00:05:34.920 ************************************ 00:05:34.920 03:52:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:34.920 03:52:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.920 03:52:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.920 03:52:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 ************************************ 00:05:35.179 START TEST locking_overlapped_coremask_via_rpc 00:05:35.179 ************************************ 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4071988 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4071988 /var/tmp/spdk.sock 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4071988 ']' 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.179 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 [2024-12-10 03:52:34.277406] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:35.179 [2024-12-10 03:52:34.277458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071988 ] 00:05:35.179 [2024-12-10 03:52:34.351644] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
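This via_rpc variant starts its first target with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice: no per-core locks are taken at boot, and claiming is deferred until the framework_enable_cpumask_locks RPC is issued later. The locks themselves are files named /var/tmp/spdk_cpu_lock_<core>, as the check_remaining_locks helper (cpu_locks.sh@36-38 above) makes explicit. A rough shell analogue, using flock(1) in place of the lock the target actually takes from C, is:

    # Illustrative sketch only, not SPDK code: approximate one core claim.
    core=002
    exec 9> "/var/tmp/spdk_cpu_lock_${core}"
    if flock -n 9; then
        echo "core ${core} claimed"
    else
        echo "core ${core} already claimed by another process" >&2
    fi

    # check_remaining_locks then just compares the glob of live lock files
    # against the brace expansion expected for a 3-core mask:
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 locked"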
00:05:35.179 [2024-12-10 03:52:34.351673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.179 [2024-12-10 03:52:34.390219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.179 [2024-12-10 03:52:34.390327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.179 [2024-12-10 03:52:34.390328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4071996 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4071996 /var/tmp/spdk2.sock 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4071996 ']' 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.438 03:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.438 [2024-12-10 03:52:34.659348] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:35.438 [2024-12-10 03:52:34.659394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071996 ] 00:05:35.697 [2024-12-10 03:52:34.750097] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
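With both targets up and no locks held by either, the test can drive them independently over their RPC sockets. The rpc_cmd calls in the trace that follows are thin wrappers around scripts/rpc.py; spelled out, the two invocations are roughly:

    # rpc_cmd expanded (socket paths as used in this run):
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_enable_cpumask_locks                          # first target, default /var/tmp/spdk.sock
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target; expected to fail on core 2

The first call succeeds and claims cores 0-2 for pid 4071988; the second is wrapped in NOT because core 2 is by then taken.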
00:05:35.697 [2024-12-10 03:52:34.750127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.697 [2024-12-10 03:52:34.832136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.697 [2024-12-10 03:52:34.832254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.697 [2024-12-10 03:52:34.832255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.266 [2024-12-10 03:52:35.498243] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4071988 has claimed it. 
00:05:36.266 request: 00:05:36.266 { 00:05:36.266 "method": "framework_enable_cpumask_locks", 00:05:36.266 "req_id": 1 00:05:36.266 } 00:05:36.266 Got JSON-RPC error response 00:05:36.266 response: 00:05:36.266 { 00:05:36.266 "code": -32603, 00:05:36.266 "message": "Failed to claim CPU core: 2" 00:05:36.266 } 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4071988 /var/tmp/spdk.sock 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4071988 ']' 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.266 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4071996 /var/tmp/spdk2.sock 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4071996 ']' 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
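The request/response pair above is the raw JSON-RPC exchange behind the failed rpc_cmd: the method itself is permitted, but the server cannot take the core 2 lock, so it answers with internal error -32603 rather than method-not-found. Reproducing the exchange by hand against the UNIX socket would look roughly like the following (a sketch only, assuming a netcat build with -U support; not part of the test):

    printf '%s\n' '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' \
      | nc -U /var/tmp/spdk2.sock
    # expected reply, matching the dump above:
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}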
00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.525 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.784 00:05:36.784 real 0m1.673s 00:05:36.784 user 0m0.806s 00:05:36.784 sys 0m0.128s 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.784 03:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.784 ************************************ 00:05:36.784 END TEST locking_overlapped_coremask_via_rpc 00:05:36.784 ************************************ 00:05:36.784 03:52:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.784 03:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4071988 ]] 00:05:36.784 03:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4071988 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4071988 ']' 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4071988 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4071988 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4071988' 00:05:36.784 killing process with pid 4071988 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4071988 00:05:36.784 03:52:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4071988 00:05:37.043 03:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4071996 ]] 00:05:37.043 03:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4071996 00:05:37.043 03:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4071996 ']' 00:05:37.043 03:52:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4071996 00:05:37.043 03:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:37.043 03:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:37.043 03:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4071996 00:05:37.302 03:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.302 03:52:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.302 03:52:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4071996' 00:05:37.302 killing process with pid 4071996 00:05:37.302 03:52:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4071996 00:05:37.302 03:52:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4071996 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4071988 ]] 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4071988 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4071988 ']' 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4071988 00:05:37.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4071988) - No such process 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4071988 is not found' 00:05:37.561 Process with pid 4071988 is not found 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4071996 ]] 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4071996 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4071996 ']' 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4071996 00:05:37.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4071996) - No such process 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4071996 is not found' 00:05:37.561 Process with pid 4071996 is not found 00:05:37.561 03:52:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:37.561 00:05:37.561 real 0m14.293s 00:05:37.561 user 0m25.569s 00:05:37.561 sys 0m4.817s 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.561 03:52:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.561 ************************************ 00:05:37.561 END TEST cpu_locks 00:05:37.561 ************************************ 00:05:37.561 00:05:37.561 real 0m38.628s 00:05:37.561 user 1m13.912s 00:05:37.561 sys 0m8.331s 00:05:37.561 03:52:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.561 03:52:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.561 ************************************ 00:05:37.561 END TEST event 00:05:37.561 ************************************ 00:05:37.561 03:52:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:37.561 03:52:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.561 03:52:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.561 03:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.561 ************************************ 00:05:37.561 START TEST thread 00:05:37.561 ************************************ 00:05:37.561 03:52:36 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:37.561 * Looking for test storage... 00:05:37.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.820 03:52:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.820 03:52:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.820 03:52:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.820 03:52:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.820 03:52:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.820 03:52:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.820 03:52:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.820 03:52:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.820 03:52:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.820 03:52:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.820 03:52:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.820 03:52:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:37.820 03:52:36 thread -- scripts/common.sh@345 -- # : 1 00:05:37.820 03:52:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.820 03:52:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.820 03:52:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:37.820 03:52:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:37.820 03:52:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.820 03:52:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:37.820 03:52:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.820 03:52:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:37.820 03:52:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:37.820 03:52:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.820 03:52:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:37.820 03:52:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.820 03:52:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.820 03:52:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.820 03:52:36 thread -- scripts/common.sh@368 -- # return 0 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.820 --rc genhtml_branch_coverage=1 00:05:37.820 --rc genhtml_function_coverage=1 00:05:37.820 --rc genhtml_legend=1 00:05:37.820 --rc geninfo_all_blocks=1 00:05:37.820 --rc geninfo_unexecuted_blocks=1 00:05:37.820 00:05:37.820 ' 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.820 --rc genhtml_branch_coverage=1 00:05:37.820 --rc genhtml_function_coverage=1 00:05:37.820 --rc genhtml_legend=1 00:05:37.820 --rc geninfo_all_blocks=1 00:05:37.820 --rc geninfo_unexecuted_blocks=1 00:05:37.820 
00:05:37.820 ' 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.820 --rc genhtml_branch_coverage=1 00:05:37.820 --rc genhtml_function_coverage=1 00:05:37.820 --rc genhtml_legend=1 00:05:37.820 --rc geninfo_all_blocks=1 00:05:37.820 --rc geninfo_unexecuted_blocks=1 00:05:37.820 00:05:37.820 ' 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.820 --rc genhtml_branch_coverage=1 00:05:37.820 --rc genhtml_function_coverage=1 00:05:37.820 --rc genhtml_legend=1 00:05:37.820 --rc geninfo_all_blocks=1 00:05:37.820 --rc geninfo_unexecuted_blocks=1 00:05:37.820 00:05:37.820 ' 00:05:37.820 03:52:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.820 03:52:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:37.821 03:52:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.821 03:52:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.821 ************************************ 00:05:37.821 START TEST thread_poller_perf 00:05:37.821 ************************************ 00:05:37.821 03:52:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.821 [2024-12-10 03:52:36.986445] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:37.821 [2024-12-10 03:52:36.986515] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072543 ] 00:05:37.821 [2024-12-10 03:52:37.066082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.079 [2024-12-10 03:52:37.104893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.079 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:39.016 [2024-12-10T02:52:38.302Z] ====================================== 00:05:39.016 [2024-12-10T02:52:38.302Z] busy:2108992286 (cyc) 00:05:39.016 [2024-12-10T02:52:38.302Z] total_run_count: 417000 00:05:39.016 [2024-12-10T02:52:38.302Z] tsc_hz: 2100000000 (cyc) 00:05:39.016 [2024-12-10T02:52:38.302Z] ====================================== 00:05:39.016 [2024-12-10T02:52:38.302Z] poller_cost: 5057 (cyc), 2408 (nsec) 00:05:39.016 00:05:39.016 real 0m1.186s 00:05:39.016 user 0m1.114s 00:05:39.016 sys 0m0.068s 00:05:39.016 03:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.016 03:52:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.016 ************************************ 00:05:39.016 END TEST thread_poller_perf 00:05:39.016 ************************************ 00:05:39.016 03:52:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.016 03:52:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:39.016 03:52:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.016 03:52:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.016 ************************************ 00:05:39.016 START TEST thread_poller_perf 00:05:39.016 ************************************ 00:05:39.016 03:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.016 [2024-12-10 03:52:38.240009] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:05:39.016 [2024-12-10 03:52:38.240074] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072790 ] 00:05:39.275 [2024-12-10 03:52:38.319286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.275 [2024-12-10 03:52:38.358336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.275 Running 1000 pollers for 1 seconds with 0 microseconds period. 
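The table above is plain arithmetic over the TSC: poller_cost is busy cycles divided by total_run_count, then converted to nanoseconds using the 2.1 GHz TSC frequency. Reproducing the first run's numbers in shell:

    # Derivation of the poller_cost line from the run above.
    busy=2108992286; runs=417000; tsc_hz=2100000000
    cyc=$(( busy / runs ))                    # 5057 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 2408 ns at 2.1 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The zero-period run announced just above repeats the same computation; with -l 0 there is no idle period between iterations, so the measured cost falls to the bare per-call overhead (414 cycles in the table that follows).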
00:05:40.212 [2024-12-10T02:52:39.498Z] ====================================== 00:05:40.212 [2024-12-10T02:52:39.498Z] busy:2101543722 (cyc) 00:05:40.212 [2024-12-10T02:52:39.498Z] total_run_count: 5065000 00:05:40.212 [2024-12-10T02:52:39.498Z] tsc_hz: 2100000000 (cyc) 00:05:40.212 [2024-12-10T02:52:39.498Z] ====================================== 00:05:40.212 [2024-12-10T02:52:39.498Z] poller_cost: 414 (cyc), 197 (nsec) 00:05:40.212 00:05:40.212 real 0m1.177s 00:05:40.212 user 0m1.094s 00:05:40.212 sys 0m0.078s 00:05:40.212 03:52:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.212 03:52:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.212 ************************************ 00:05:40.212 END TEST thread_poller_perf 00:05:40.212 ************************************ 00:05:40.212 03:52:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:40.212 00:05:40.212 real 0m2.673s 00:05:40.212 user 0m2.368s 00:05:40.212 sys 0m0.318s 00:05:40.212 03:52:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.212 03:52:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.212 ************************************ 00:05:40.212 END TEST thread 00:05:40.212 ************************************ 00:05:40.212 03:52:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:40.212 03:52:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.212 03:52:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.212 03:52:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.212 03:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.471 ************************************ 00:05:40.471 START TEST app_cmdline 00:05:40.471 ************************************ 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.472 * Looking for test storage... 
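Every block in this log is driven by the same run_test helper from autotest_common.sh, which prints the START/END TEST banners and the real/user/sys summary around each named test before handing off to the next. A simplified sketch of its shape (the real helper also manages xtrace and argument validation):

    run_test() {                 # simplified; not the exact autotest_common.sh version
        local name=$1; shift
        echo "START TEST ${name}"
        time "$@"                # produces the real/user/sys lines seen above
        echo "END TEST ${name}"
    }
    run_test thread_poller_perf ./poller_perf -b 1000 -l 0 -t 1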
00:05:40.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.472 03:52:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.472 --rc genhtml_branch_coverage=1 00:05:40.472 --rc genhtml_function_coverage=1 00:05:40.472 --rc genhtml_legend=1 00:05:40.472 --rc geninfo_all_blocks=1 00:05:40.472 --rc geninfo_unexecuted_blocks=1 00:05:40.472 00:05:40.472 ' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.472 --rc genhtml_branch_coverage=1 00:05:40.472 --rc genhtml_function_coverage=1 00:05:40.472 --rc genhtml_legend=1 00:05:40.472 --rc geninfo_all_blocks=1 00:05:40.472 --rc geninfo_unexecuted_blocks=1 
00:05:40.472 00:05:40.472 ' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.472 --rc genhtml_branch_coverage=1 00:05:40.472 --rc genhtml_function_coverage=1 00:05:40.472 --rc genhtml_legend=1 00:05:40.472 --rc geninfo_all_blocks=1 00:05:40.472 --rc geninfo_unexecuted_blocks=1 00:05:40.472 00:05:40.472 ' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.472 --rc genhtml_branch_coverage=1 00:05:40.472 --rc genhtml_function_coverage=1 00:05:40.472 --rc genhtml_legend=1 00:05:40.472 --rc geninfo_all_blocks=1 00:05:40.472 --rc geninfo_unexecuted_blocks=1 00:05:40.472 00:05:40.472 ' 00:05:40.472 03:52:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:40.472 03:52:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4073079 00:05:40.472 03:52:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4073079 00:05:40.472 03:52:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4073079 ']' 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.472 03:52:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.472 [2024-12-10 03:52:39.729843] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:40.472 [2024-12-10 03:52:39.729892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073079 ] 00:05:40.731 [2024-12-10 03:52:39.805108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.731 [2024-12-10 03:52:39.845304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.990 03:52:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.990 03:52:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:40.990 { 00:05:40.990 "version": "SPDK v25.01-pre git sha1 1ae735a5d", 00:05:40.990 "fields": { 00:05:40.990 "major": 25, 00:05:40.990 "minor": 1, 00:05:40.990 "patch": 0, 00:05:40.990 "suffix": "-pre", 00:05:40.990 "commit": "1ae735a5d" 00:05:40.990 } 00:05:40.990 } 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:40.990 03:52:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:40.990 03:52:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.990 03:52:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.990 03:52:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.248 03:52:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:41.248 03:52:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:41.248 03:52:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:41.248 03:52:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.248 request: 00:05:41.248 { 00:05:41.248 "method": "env_dpdk_get_mem_stats", 00:05:41.248 "req_id": 1 00:05:41.248 } 00:05:41.248 Got JSON-RPC error response 00:05:41.248 response: 00:05:41.248 { 00:05:41.248 "code": -32601, 00:05:41.249 "message": "Method not found" 00:05:41.249 } 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.249 03:52:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4073079 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4073079 ']' 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4073079 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.249 03:52:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4073079 00:05:41.507 03:52:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.507 03:52:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.507 03:52:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4073079' 00:05:41.507 killing process with pid 4073079 00:05:41.507 03:52:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 4073079 00:05:41.507 03:52:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 4073079 00:05:41.765 00:05:41.765 real 0m1.343s 00:05:41.765 user 0m1.539s 00:05:41.765 sys 0m0.470s 00:05:41.765 03:52:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.765 03:52:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.765 ************************************ 00:05:41.765 END TEST app_cmdline 00:05:41.765 ************************************ 00:05:41.765 03:52:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.765 03:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.765 03:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.765 03:52:40 -- common/autotest_common.sh@10 -- # set +x 00:05:41.765 ************************************ 00:05:41.765 START TEST version 00:05:41.765 ************************************ 00:05:41.765 03:52:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.765 * Looking for test storage... 
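The app_cmdline block above is an RPC allowlist test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answered normally while env_dpdk_get_mem_stats was rejected with -32601 "Method not found". Note the contrast with the cpumask failure earlier in this log, where the method was allowed but the operation itself failed (-32603). Condensed, the setup was:

    # Paths as used in this run; only the two listed methods are served.
    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $BIN --rpcs-allowed spdk_get_version,rpc_get_methods &
    # any other method -> {"code": -32601, "message": "Method not found"}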
00:05:41.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.765 03:52:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.765 03:52:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.765 03:52:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.024 03:52:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.024 03:52:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.024 03:52:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.024 03:52:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.024 03:52:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.024 03:52:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.024 03:52:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.024 03:52:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.024 03:52:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.024 03:52:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.024 03:52:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.024 03:52:41 version -- scripts/common.sh@344 -- # case "$op" in 00:05:42.024 03:52:41 version -- scripts/common.sh@345 -- # : 1 00:05:42.024 03:52:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.024 03:52:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.024 03:52:41 version -- scripts/common.sh@365 -- # decimal 1 00:05:42.024 03:52:41 version -- scripts/common.sh@353 -- # local d=1 00:05:42.024 03:52:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.024 03:52:41 version -- scripts/common.sh@355 -- # echo 1 00:05:42.024 03:52:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.024 03:52:41 version -- scripts/common.sh@366 -- # decimal 2 00:05:42.024 03:52:41 version -- scripts/common.sh@353 -- # local d=2 00:05:42.024 03:52:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.024 03:52:41 version -- scripts/common.sh@355 -- # echo 2 00:05:42.024 03:52:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.024 03:52:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.024 03:52:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.024 03:52:41 version -- scripts/common.sh@368 -- # return 0 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.024 --rc genhtml_branch_coverage=1 00:05:42.024 --rc genhtml_function_coverage=1 00:05:42.024 --rc genhtml_legend=1 00:05:42.024 --rc geninfo_all_blocks=1 00:05:42.024 --rc geninfo_unexecuted_blocks=1 00:05:42.024 00:05:42.024 ' 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.024 --rc genhtml_branch_coverage=1 00:05:42.024 --rc genhtml_function_coverage=1 00:05:42.024 --rc genhtml_legend=1 00:05:42.024 --rc geninfo_all_blocks=1 00:05:42.024 --rc geninfo_unexecuted_blocks=1 00:05:42.024 00:05:42.024 ' 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.024 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.024 --rc genhtml_branch_coverage=1 00:05:42.024 --rc genhtml_function_coverage=1 00:05:42.024 --rc genhtml_legend=1 00:05:42.024 --rc geninfo_all_blocks=1 00:05:42.024 --rc geninfo_unexecuted_blocks=1 00:05:42.024 00:05:42.024 ' 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.024 --rc genhtml_branch_coverage=1 00:05:42.024 --rc genhtml_function_coverage=1 00:05:42.024 --rc genhtml_legend=1 00:05:42.024 --rc geninfo_all_blocks=1 00:05:42.024 --rc geninfo_unexecuted_blocks=1 00:05:42.024 00:05:42.024 ' 00:05:42.024 03:52:41 version -- app/version.sh@17 -- # get_header_version major 00:05:42.024 03:52:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # cut -f2 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.024 03:52:41 version -- app/version.sh@17 -- # major=25 00:05:42.024 03:52:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:42.024 03:52:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # cut -f2 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.024 03:52:41 version -- app/version.sh@18 -- # minor=1 00:05:42.024 03:52:41 version -- app/version.sh@19 -- # get_header_version patch 00:05:42.024 03:52:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # cut -f2 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.024 03:52:41 version -- app/version.sh@19 -- # patch=0 00:05:42.024 03:52:41 version -- app/version.sh@20 -- # get_header_version suffix 00:05:42.024 03:52:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # cut -f2 00:05:42.024 03:52:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.024 03:52:41 version -- app/version.sh@20 -- # suffix=-pre 00:05:42.024 03:52:41 version -- app/version.sh@22 -- # version=25.1 00:05:42.024 03:52:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:42.024 03:52:41 version -- app/version.sh@28 -- # version=25.1rc0 00:05:42.024 03:52:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:42.024 03:52:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:42.024 03:52:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:42.024 03:52:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:42.024 00:05:42.024 real 0m0.243s 00:05:42.024 user 0m0.152s 00:05:42.024 sys 0m0.134s 00:05:42.024 03:52:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.024 
03:52:41 version -- common/autotest_common.sh@10 -- # set +x 00:05:42.024 ************************************ 00:05:42.024 END TEST version 00:05:42.024 ************************************ 00:05:42.024 03:52:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:42.024 03:52:41 -- spdk/autotest.sh@194 -- # uname -s 00:05:42.024 03:52:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:42.024 03:52:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:42.024 03:52:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:42.024 03:52:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:42.024 03:52:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.024 03:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.024 03:52:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:42.024 03:52:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:42.024 03:52:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:42.024 03:52:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:42.024 03:52:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.024 03:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.024 ************************************ 00:05:42.024 START TEST nvmf_tcp 00:05:42.024 ************************************ 00:05:42.025 03:52:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:42.284 * Looking for test storage... 
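The version test that just finished cross-checks three sources of the version string: the C header include/spdk/version.h, the 25.1rc0 string assembled from it (version.sh turns the -pre suffix with patch 0 into the rc0 tag), and the Python package's spdk.__version__. The header is parsed with the grep/cut/tr pipeline visible above; isolated, that pipeline is:

    # get_header_version, isolated from version.sh as exercised above; cut relies
    # on its default tab delimiter between the #define fields in version.h.
    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"   # 25.1-pre in this run, reported as 25.1rc0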
00:05:42.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.284 03:52:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.284 --rc genhtml_branch_coverage=1 00:05:42.284 --rc genhtml_function_coverage=1 00:05:42.284 --rc genhtml_legend=1 00:05:42.284 --rc geninfo_all_blocks=1 00:05:42.284 --rc geninfo_unexecuted_blocks=1 00:05:42.284 00:05:42.284 ' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.284 --rc genhtml_branch_coverage=1 00:05:42.284 --rc genhtml_function_coverage=1 00:05:42.284 --rc genhtml_legend=1 00:05:42.284 --rc geninfo_all_blocks=1 00:05:42.284 --rc geninfo_unexecuted_blocks=1 00:05:42.284 00:05:42.284 ' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.284 --rc genhtml_branch_coverage=1 00:05:42.284 --rc genhtml_function_coverage=1 00:05:42.284 --rc genhtml_legend=1 00:05:42.284 --rc geninfo_all_blocks=1 00:05:42.284 --rc geninfo_unexecuted_blocks=1 00:05:42.284 00:05:42.284 ' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.284 --rc genhtml_branch_coverage=1 00:05:42.284 --rc genhtml_function_coverage=1 00:05:42.284 --rc genhtml_legend=1 00:05:42.284 --rc geninfo_all_blocks=1 00:05:42.284 --rc geninfo_unexecuted_blocks=1 00:05:42.284 00:05:42.284 ' 00:05:42.284 03:52:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:42.284 03:52:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:42.284 03:52:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.284 03:52:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.284 ************************************ 00:05:42.284 START TEST nvmf_target_core 00:05:42.284 ************************************ 00:05:42.285 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:42.544 * Looking for test storage... 00:05:42.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.545 --rc genhtml_branch_coverage=1 00:05:42.545 --rc genhtml_function_coverage=1 00:05:42.545 --rc genhtml_legend=1 00:05:42.545 --rc geninfo_all_blocks=1 00:05:42.545 --rc geninfo_unexecuted_blocks=1 00:05:42.545 00:05:42.545 ' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.545 --rc genhtml_branch_coverage=1 00:05:42.545 --rc genhtml_function_coverage=1 00:05:42.545 --rc genhtml_legend=1 00:05:42.545 --rc geninfo_all_blocks=1 00:05:42.545 --rc geninfo_unexecuted_blocks=1 00:05:42.545 00:05:42.545 ' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.545 --rc genhtml_branch_coverage=1 00:05:42.545 --rc genhtml_function_coverage=1 00:05:42.545 --rc genhtml_legend=1 00:05:42.545 --rc geninfo_all_blocks=1 00:05:42.545 --rc geninfo_unexecuted_blocks=1 00:05:42.545 00:05:42.545 ' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.545 --rc genhtml_branch_coverage=1 00:05:42.545 --rc genhtml_function_coverage=1 00:05:42.545 --rc genhtml_legend=1 00:05:42.545 --rc geninfo_all_blocks=1 00:05:42.545 --rc geninfo_unexecuted_blocks=1 00:05:42.545 00:05:42.545 ' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:42.545 
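A side note on the '[: : integer expression expected' message traced above: line 33 of test/nvmf/common.sh evaluates [ '' -eq 1 ] because the variable it gates on is unset in this job's configuration, and the numeric -eq operator rejects an empty string. The test harmlessly evaluates false and the run continues, but the stderr noise is avoidable by giving the flag a default before the comparison. A minimal sketch, assuming the variable is an optional 0/1 flag; its actual name is not visible in the trace, so SOME_OPTIONAL_FLAG and --some-extra-arg below are placeholders:

# hypothetical guard; SOME_OPTIONAL_FLAG stands in for the unset flag tested
# at common.sh line 33, --some-extra-arg for whatever argument that line appends
if [[ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]]; then
    NVMF_APP+=(--some-extra-arg)
fi

${VAR:-0} substitutes 0 when the variable is unset or empty, so the comparison always sees an integer; a plain string test such as [[ "$SOME_OPTIONAL_FLAG" == 1 ]] would silence the error just as well.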
************************************ 00:05:42.545 START TEST nvmf_abort 00:05:42.545 ************************************ 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:42.545 * Looking for test storage... 00:05:42.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.545 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.546 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.805 --rc genhtml_function_coverage=1 00:05:42.805 --rc genhtml_legend=1 00:05:42.805 --rc geninfo_all_blocks=1 00:05:42.805 --rc geninfo_unexecuted_blocks=1 00:05:42.805 00:05:42.805 ' 00:05:42.805 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.806 --rc genhtml_function_coverage=1 00:05:42.806 --rc genhtml_legend=1 00:05:42.806 --rc geninfo_all_blocks=1 00:05:42.806 --rc geninfo_unexecuted_blocks=1 00:05:42.806 00:05:42.806 ' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.806 --rc genhtml_branch_coverage=1 00:05:42.806 --rc genhtml_function_coverage=1 00:05:42.806 --rc genhtml_legend=1 00:05:42.806 --rc geninfo_all_blocks=1 00:05:42.806 --rc geninfo_unexecuted_blocks=1 00:05:42.806 00:05:42.806 ' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.806 --rc genhtml_branch_coverage=1 00:05:42.806 --rc genhtml_function_coverage=1 00:05:42.806 --rc genhtml_legend=1 00:05:42.806 --rc geninfo_all_blocks=1 00:05:42.806 --rc geninfo_unexecuted_blocks=1 00:05:42.806 00:05:42.806 ' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
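nvmftestinit, traced below, first enumerates candidate NVMe-oF NICs by PCI ID, selects the two e810 ports (cvl_0_0 and cvl_0_1 on this host), and then builds the TCP test topology by moving the target-side port into a dedicated network namespace so initiator and target traffic genuinely crosses the link. Collected from the commands that appear further down in this trace, the plumbing amounts to the following sketch (interface names are whatever the ice driver assigned on this machine):

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator sanity check

The target application itself is later prefixed with 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD), which is why nvmf_tgt ends up listening on 10.0.0.2 from inside the namespace.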
00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:42.806 03:52:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:49.374 03:52:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:49.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:49.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:49.374 03:52:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:49.374 Found net devices under 0000:af:00.0: cvl_0_0 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:49.374 Found net devices under 0000:af:00.1: cvl_0_1 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:49.374 03:52:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:49.374 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:49.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:49.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:05:49.375 00:05:49.375 --- 10.0.0.2 ping statistics --- 00:05:49.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:49.375 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:49.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:49.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:05:49.375 00:05:49.375 --- 10.0.0.1 ping statistics --- 00:05:49.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:49.375 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=4076702 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4076702 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4076702 ']' 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.375 03:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.375 [2024-12-10 03:52:48.017630] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:05:49.375 [2024-12-10 03:52:48.017678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:49.375 [2024-12-10 03:52:48.106775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.375 [2024-12-10 03:52:48.148339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:49.375 [2024-12-10 03:52:48.148376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:49.375 [2024-12-10 03:52:48.148384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.375 [2024-12-10 03:52:48.148389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.375 [2024-12-10 03:52:48.148394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:49.375 [2024-12-10 03:52:48.149756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.375 [2024-12-10 03:52:48.149862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.375 [2024-12-10 03:52:48.149864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.634 [2024-12-10 03:52:48.896414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.634 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 Malloc0 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 Delay0 
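The rpc_cmd calls around this point assemble the abort test's target stack: a TCP transport, a 64 MiB malloc bdev with 4 KiB blocks, a delay bdev stacked on top of it with one-second average and p99 latencies (so submitted I/O stays in flight long enough to be aborted), and a subsystem exposing that bdev on 10.0.0.2:4420. Expressed as the equivalent standalone rpc.py invocations, a sketch of the same RPCs the wrapper issues, with flags copied from the trace:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds (1 s)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The delay bdev is the point of the exercise: with a one-second service time per I/O, nearly every request is still queued when the initiator's abort arrives.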
00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 [2024-12-10 03:52:48.978490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.892 03:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:49.892 [2024-12-10 03:52:49.114856] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:52.424 Initializing NVMe Controllers 00:05:52.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:52.424 controller IO queue size 128 less than required 00:05:52.424 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:52.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:52.424 Initialization complete. Launching workers. 
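For reference, the workload just launched can be reproduced by hand against the same listener; judging from the invocation above, -c 0x1 pins the example to one core, -q 128 sets the queue depth, -t 1 runs it for one second, and -l warning trims the log level (flag meanings inferred from this run rather than restated from documentation):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

In the statistics that follow, the abort line's counters can be read as: 'success' for aborts that cancelled an in-flight I/O, 'unsuccessful' for aborts whose target I/O had already completed, and 'failed' for abort commands that themselves errored out; a clean run ends with failed 0.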
00:05:52.424 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37620 00:05:52.424 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37681, failed to submit 62 00:05:52.424 success 37624, unsuccessful 57, failed 0 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:52.424 rmmod nvme_tcp 00:05:52.424 rmmod nvme_fabrics 00:05:52.424 rmmod nvme_keyring 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4076702 ']' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4076702 ']' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4076702' 00:05:52.424 killing process with pid 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4076702 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:52.424 03:52:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:52.424 03:52:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.329 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:54.329 00:05:54.329 real 0m11.849s 00:05:54.329 user 0m13.633s 00:05:54.329 sys 0m5.449s 00:05:54.329 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.329 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.329 ************************************ 00:05:54.329 END TEST nvmf_abort 00:05:54.329 ************************************ 00:05:54.588 03:52:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:54.588 03:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.588 03:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.588 03:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.588 ************************************ 00:05:54.588 START TEST nvmf_ns_hotplug_stress 00:05:54.588 ************************************ 00:05:54.588 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:54.588 * Looking for test storage... 
00:05:54.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.589 --rc genhtml_branch_coverage=1 00:05:54.589 --rc genhtml_function_coverage=1 00:05:54.589 --rc genhtml_legend=1 00:05:54.589 --rc geninfo_all_blocks=1 00:05:54.589 --rc geninfo_unexecuted_blocks=1 00:05:54.589 00:05:54.589 ' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.589 --rc genhtml_branch_coverage=1 00:05:54.589 --rc genhtml_function_coverage=1 00:05:54.589 --rc genhtml_legend=1 00:05:54.589 --rc geninfo_all_blocks=1 00:05:54.589 --rc geninfo_unexecuted_blocks=1 00:05:54.589 00:05:54.589 ' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.589 --rc genhtml_branch_coverage=1 00:05:54.589 --rc genhtml_function_coverage=1 00:05:54.589 --rc genhtml_legend=1 00:05:54.589 --rc geninfo_all_blocks=1 00:05:54.589 --rc geninfo_unexecuted_blocks=1 00:05:54.589 00:05:54.589 ' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.589 --rc genhtml_branch_coverage=1 00:05:54.589 --rc genhtml_function_coverage=1 00:05:54.589 --rc genhtml_legend=1 00:05:54.589 --rc geninfo_all_blocks=1 00:05:54.589 --rc geninfo_unexecuted_blocks=1 00:05:54.589 00:05:54.589 ' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.589 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.590 03:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:01.157 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:01.158 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.158 
03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:01.158 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:01.158 Found net devices under 0000:af:00.0: cvl_0_0 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:01.158 Found net devices under 0000:af:00.1: cvl_0_1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:01.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:06:01.158 00:06:01.158 --- 10.0.0.2 ping statistics --- 00:06:01.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.158 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:01.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:06:01.158 00:06:01.158 --- 10.0.0.1 ping statistics --- 00:06:01.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.158 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4080871 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4080871 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
4080871 ']' 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.158 03:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:01.158 [2024-12-10 03:53:00.022349] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:06:01.158 [2024-12-10 03:53:00.022396] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.158 [2024-12-10 03:53:00.096917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.158 [2024-12-10 03:53:00.137773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.158 [2024-12-10 03:53:00.137808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.158 [2024-12-10 03:53:00.137816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.159 [2024-12-10 03:53:00.137822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.159 [2024-12-10 03:53:00.137827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
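What the xtrace above has been doing, stripped of timestamps, is nvmftestinit building a self-contained NVMe/TCP testbed out of the two back-to-back e810 ports: cvl_0_0 is moved into a private network namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the host stack as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. Condensed to plain commands it looks roughly like this -- the setup commands are lifted from the log, but the polling loop at the end is an illustrative stand-in for the harness's waitforlisten helper, not the real implementation:

TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"          # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, host stack
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target -> initiator

# Start the target on cores 1-3 (-m 0xE), leaving core 0 free for the perf
# initiator. The RPC endpoint is a unix socket on the shared filesystem, which
# is why rpc.py can reach it from outside the namespace.
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"    # abort if the target died during startup
    sleep 0.5
done

Once the socket answers, the reactors below come up on cores 1-3 and the rest of the configuration proceeds over rpc.py.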
00:06:01.159 [2024-12-10 03:53:00.139196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-10 03:53:00.139284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-10 03:53:00.139284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:01.159 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:01.159 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:01.159 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:01.159 [2024-12-10 03:53:00.436700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:01.416 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:01.416 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:01.674 [2024-12-10 03:53:00.826078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:01.674 03:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:01.932 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:02.190 Malloc0
00:06:02.190 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:02.190 Delay0
00:06:02.448 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.448 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:02.706 NULL1
00:06:02.706 03:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:02.964 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:02.964 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4081133
00:06:02.964 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:02.964 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.222 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.222 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:03.222 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:03.480 true
00:06:03.480 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:03.480 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.737 03:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.995 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:03.995 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:03.995 true
00:06:03.995 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:03.995 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.252 Read completed with error (sct=0, sc=11)
00:06:04.252 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.510 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:04.510 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:04.767 true
00:06:04.767 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:04.767 03:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.699 03:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.699 03:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:05.699 03:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:06:05.955 true
00:06:05.955 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:05.955 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.210 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.467 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:06:06.467 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:06:06.467 true
00:06:06.467 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:06.467 03:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 03:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:07.839 03:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:06:07.839 03:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
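The cycle repeating above, and for the rest of this run, is the stress loop itself: with spdk_nvme_perf pounding the subsystem from the other end, each pass checks that the perf process is still alive, hot-removes namespace 1 while I/O is outstanding, re-attaches Delay0, and grows NULL1 by one block. The Delay0 bdev (Malloc0 wrapped with roughly a second of artificial latency, per the 1000000 us arguments to bdev_delay_create) keeps reads in flight long enough for the remove/add race to actually bite, and the "Read completed with error (sct=0, sc=11)" lines are that race paying off: sct=0/sc=0x0b decodes to the generic NVMe status "Invalid Namespace or Format", which is exactly what reads caught mid-hotplug should return. One iteration, condensed to a sketch (not the script verbatim; the real script also counts iterations and checks the resize result):

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do      # run until the 30 s workload exits
    $rpc nvmf_subsystem_remove_ns "$nqn" 1     # hot-unplug nsid 1 under active I/O
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0   # plug the delay bdev back in
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"   # resize NULL1: 1001, 1002, ...
done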
00:06:08.097 true 00:06:08.097 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:08.097 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.028 03:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.028 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:09.029 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:09.286 true 00:06:09.286 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:09.286 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.543 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.543 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:09.543 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:09.800 true 00:06:09.800 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:09.800 03:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.172 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:11.172 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:11.429 true 00:06:11.429 03:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:11.429 03:53:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.362 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.362 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:12.362 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:12.620 true 00:06:12.620 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:12.620 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.878 03:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.135 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:13.135 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:13.135 true 00:06:13.135 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:13.135 03:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.508 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:14.508 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:14.766 true 00:06:14.766 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:14.766 03:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.699 03:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.699 03:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:15.699 03:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:15.958 true 00:06:15.958 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:15.958 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.216 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.474 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:16.474 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:16.474 true 00:06:16.474 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:16.474 03:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 03:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.848 03:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:17.848 03:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:18.106 true 00:06:18.106 03:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:18.106 03:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.038 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:19.038 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.038 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:19.038 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:19.296 true 00:06:19.296 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:19.296 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.554 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.812 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:19.812 03:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:19.812 true 00:06:19.812 03:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:19.812 03:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.186 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:21.186 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:21.444 true 00:06:21.444 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:21.444 03:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.378 03:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.378 03:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:22.378 03:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:22.636 true 00:06:22.636 03:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:22.636 03:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.893 03:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.151 03:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:23.151 03:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:23.151 true 00:06:23.151 03:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:23.151 03:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.524 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:24.524 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:24.782 true 00:06:24.782 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:24.782 03:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.716 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.716 03:53:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:25.716 03:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:25.973 true 00:06:25.973 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:25.973 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.974 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.231 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:26.231 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:26.489 true 00:06:26.489 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:26.489 03:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.861 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:27.861 03:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:28.123 true 00:06:28.123 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:28.123 03:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.768 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.046 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1025 00:06:29.046 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:29.320 true 00:06:29.320 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:29.320 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.602 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.602 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:29.602 03:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:29.871 true 00:06:29.871 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:29.871 03:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 03:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.244 03:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:31.244 03:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:31.502 true 00:06:31.502 03:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133 00:06:31.502 03:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.434 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.434 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:32.434 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:32.692 true 00:06:32.692 
03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:32.692 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.692 03:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:32.950 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:32.950 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:33.207 true
00:06:33.207 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:33.207 03:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.140 Initializing NVMe Controllers
00:06:34.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:34.140 Controller IO queue size 128, less than required.
00:06:34.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:34.140 Controller IO queue size 128, less than required.
00:06:34.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:34.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:34.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:34.140 Initialization complete. Launching workers.
00:06:34.140 ========================================================
00:06:34.140 Latency(us)
00:06:34.140 Device Information : IOPS MiB/s Average min max
00:06:34.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2156.83 1.05 41220.60 1758.10 1013311.07
00:06:34.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17348.48 8.47 7356.16 1581.63 369350.31
00:06:34.140 ========================================================
00:06:34.140 Total : 19505.30 9.52 11100.77 1581.63 1013311.07
00:06:34.140
00:06:34.398 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:34.398 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:34.398 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:34.656 true
00:06:34.656 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4081133
00:06:34.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4081133) - No such process
00:06:34.656 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4081133
00:06:34.656 03:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.914 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:35.173 null0
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.173 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:35.430 null1
00:06:35.430 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:35.430 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:35.430 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:35.689 null2 00:06:35.689 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.689 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.689 03:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:35.957 null3 00:06:35.957 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.957 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.957 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:35.957 null4 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:36.218 null5 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.218 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:36.477 null6 00:06:36.477 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.477 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.477 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:36.736 null7 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
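[Editor's note] Lines @44-@53 traced above are the script's single-namespace resize stress loop. The following is a hedged reconstruction of that loop from the xtrace, not the verbatim test script; the variable name perf_pid and the exact null_size arithmetic are assumptions (the trace shows only expanded values such as 4081133 and 1029).

# Sketch of the loop traced at ns_hotplug_stress.sh@44-@53 (assumed shape).
# perf_pid is hypothetical; the trace shows its expanded value, 4081133.
while kill -0 "$perf_pid"; do                                          # @44: run while the I/O job is alive
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-add it backed by Delay0
    null_size=$((null_size + 1))                                       # @49: 1022, 1023, ... in the trace
    scripts/rpc.py bdev_null_resize NULL1 "$null_size"                 # @50: prints "true" on success
done                                                                   # kill -0 fails once the job exits ("No such process")
wait "$perf_pid"                                                       # @53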
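[Editor's note] The trace then enters the multi-threaded hotplug phase (@58-@66): eight null bdevs are created and eight add_remove workers are launched in the background against nqn.2016-06.io.spdk:cnode1. Again a hedged sketch reconstructed from the xtrace rather than quoted from the script:

# Sketch of the worker setup traced at ns_hotplug_stress.sh@58-@66 (assumed shape).
nthreads=8                                            # @58
pids=()                                               # @58
for ((i = 0; i < nthreads; i++)); do                  # @59
    scripts/rpc.py bdev_null_create "null$i" 100 4096 # @60: null0 .. null7, args as traced
done
for ((i = 0; i < nthreads; i++)); do                  # @62
    add_remove $((i + 1)) "null$i" &                  # @63: one worker per NSID (add_remove 1 null0, ...)
    pids+=($!)                                        # @64
done
wait "${pids[@]}"                                     # @66: the eight worker PIDs listed in the trace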
00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
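[Editor's note] Each worker's body is the add_remove() helper whose @14-@18 lines interleave above and below; a hedged reconstruction from the xtrace (the argument handling is an assumption, since the trace shows only the expanded nsid/bdev values):

# Sketch of add_remove() as traced at ns_hotplug_stress.sh@14-@18 (assumed shape).
add_remove() {
    local nsid=$1 bdev=$2                             # @14: e.g. nsid=1 bdev=null0
    for ((i = 0; i < 10; i++)); do                    # @16: ten hotplug cycles per worker
        scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}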
00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.736 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4086827 4086829 4086830 4086832 4086834 4086836 4086838 4086840 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.737 03:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.996 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.255 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.513 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.772 03:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.031 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.290 03:53:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.290 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.548 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.548 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.549 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.807 03:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.066 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.324 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.583 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.841 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.842 03:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.100 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.101 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.101 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.101 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.101 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.101 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.359 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.359 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.359 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.360 03:53:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:40.360 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.618 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.619 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.619 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.877 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 
03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.877 rmmod nvme_tcp 00:06:40.877 rmmod nvme_fabrics 00:06:40.877 rmmod nvme_keyring 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4080871 ']' 00:06:40.877 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4080871 00:06:40.878 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4080871 ']' 00:06:40.878 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4080871 00:06:40.878 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:40.878 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.878 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080871 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080871' 00:06:41.137 killing process with pid 4080871 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4080871 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4080871 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.137 03:53:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.137 03:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.674 00:06:43.674 real 0m48.755s 00:06:43.674 user 3m19.073s 00:06:43.674 sys 0m16.042s 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.674 ************************************ 00:06:43.674 END TEST nvmf_ns_hotplug_stress 00:06:43.674 ************************************ 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.674 ************************************ 00:06:43.674 START TEST nvmf_delete_subsystem 00:06:43.674 ************************************ 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:43.674 * Looking for test storage... 
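The stress phase that just wrapped up (ns_hotplug_stress.sh @16-@18 in the xtrace above) reduces to a simple churn: up to ten passes that attach the eight null bdevs as namespaces of nqn.2016-06.io.spdk:cnode1 and then detach them again while the target stays live. A minimal bash sketch of that shape, reconstructed from the traced commands; the shuffled NSID order is inferred from the log, and the shuf call plus the $rpc shorthand are assumptions rather than the script verbatim:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 10; i++)); do                       # @16: ten add/remove passes
    for n in $(shuf -e {1..8}); do                   # @17: attach null0..null7 as NSIDs 1..8
        $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
    done
    for n in $(shuf -e {1..8}); do                   # @18: detach them in another random order
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    done
done

The interleaved (( ++i )) / (( i < 10 )) pairs in the trace are just xtrace printing both halves of the C-style for condition, apparently racing out of order where output from concurrent streams met.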
00:06:43.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.674 --rc genhtml_branch_coverage=1 00:06:43.674 --rc genhtml_function_coverage=1 00:06:43.674 --rc genhtml_legend=1 00:06:43.674 --rc geninfo_all_blocks=1 00:06:43.674 --rc geninfo_unexecuted_blocks=1 00:06:43.674 00:06:43.674 ' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.674 --rc genhtml_branch_coverage=1 00:06:43.674 --rc genhtml_function_coverage=1 00:06:43.674 --rc genhtml_legend=1 00:06:43.674 --rc geninfo_all_blocks=1 00:06:43.674 --rc geninfo_unexecuted_blocks=1 00:06:43.674 00:06:43.674 ' 00:06:43.674 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.675 --rc genhtml_branch_coverage=1 00:06:43.675 --rc genhtml_function_coverage=1 00:06:43.675 --rc genhtml_legend=1 00:06:43.675 --rc geninfo_all_blocks=1 00:06:43.675 --rc geninfo_unexecuted_blocks=1 00:06:43.675 00:06:43.675 ' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.675 --rc genhtml_branch_coverage=1 00:06:43.675 --rc genhtml_function_coverage=1 00:06:43.675 --rc genhtml_legend=1 00:06:43.675 --rc geninfo_all_blocks=1 00:06:43.675 --rc geninfo_unexecuted_blocks=1 00:06:43.675 00:06:43.675 ' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.675 03:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:50.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.244 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.245 
03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:50.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:50.245 Found net devices under 0000:af:00.0: cvl_0_0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:50.245 Found net devices under 0000:af:00.1: cvl_0_1 
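The device-discovery pass above (nvmf/common.sh @313 through @429) is what decides which NICs the test may claim: it builds per-family allowlists of PCI IDs (e810, x722, mlx), keeps only the E810 matches present on this rig, and resolves each matching PCI function to its kernel net devices through sysfs, which is where the cvl_0_0 and cvl_0_1 ports come from. (The earlier "[: : integer expression expected" complaint is the log faithfully recording common.sh line 33 testing an empty variable with -eq; the run continues, so it is apparently benign here.) A condensed sketch, with the caveat that the lspci filter below stands in for the script's cached PCI bus scan and is an assumption; the variable names come from the xtrace:

intel=0x8086
e810=(0x1592 0x159b)                          # E810-family device IDs the script accepts
net_devs=()
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # @411: sysfs lists the interfaces
    pci_net_devs=("${pci_net_devs[@]##*/}")                   # @427: strip the path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428
    net_devs+=("${pci_net_devs[@]}")                          # @429
done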
00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:06:50.245 00:06:50.245 --- 10.0.0.2 ping statistics --- 00:06:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.245 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:06:50.245 00:06:50.245 --- 10.0.0.1 ping statistics --- 00:06:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.245 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4091259 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4091259 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4091259 ']' 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.245 03:53:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.245 [2024-12-10 03:53:48.728147] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:06:50.245 [2024-12-10 03:53:48.728196] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.245 [2024-12-10 03:53:48.806407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.245 [2024-12-10 03:53:48.845835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.245 [2024-12-10 03:53:48.845872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.245 [2024-12-10 03:53:48.845880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.245 [2024-12-10 03:53:48.845885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.245 [2024-12-10 03:53:48.845891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.245 [2024-12-10 03:53:48.847033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.245 [2024-12-10 03:53:48.847035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.245 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 [2024-12-10 03:53:48.983940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:50.246 03:53:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.246 03:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 [2024-12-10 03:53:49.004128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 NULL1 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 Delay0 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4091379 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:50.246 03:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:50.246 [2024-12-10 03:53:49.115031] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
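[editor's note] The fixture traced across the last two blocks reduces to a short RPC sequence (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock). The key piece is the delay bdev: its -r/-t/-w/-n values are microseconds, so every I/O is held for about a second; with perf driving a 128-deep randrw queue for 5 seconds, the sleep 2 before the delete is guaranteed to land while commands are still in flight.

    # the fixture traced above, condensed into the underlying RPC calls
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MiB null device, 512 B blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s of injected latency per op
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0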
00:06:52.144 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.144 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.144 03:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 [2024-12-10 03:53:51.190848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca6780 is same with the state(6) to be set 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 
Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 Read completed with error (sct=0, sc=8) 00:06:52.144 starting I/O failed: -6 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.144 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 
00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 
00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 Read completed with error (sct=0, sc=8) 00:06:52.145 starting I/O failed: -6 00:06:52.145 Write completed with error (sct=0, sc=8) 00:06:52.145 [2024-12-10 03:53:51.196727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d10000c80 is same with the state(6) to be set 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:52.145 starting I/O failed: -6 00:06:53.077 [2024-12-10 03:53:52.167456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca79b0 is same with the state(6) to be set 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Write completed with error (sct=0, sc=8) 00:06:53.077 Write completed with error (sct=0, sc=8) 00:06:53.077 Write completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.077 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 [2024-12-10 03:53:52.194001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca62c0 is same with the state(6) to be set 00:06:53.078 Write completed with error (sct=0, sc=8) 
00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 [2024-12-10 03:53:52.194258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca6960 is same with the state(6) to be set 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 [2024-12-10 03:53:52.199006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d1000d800 is same with the state(6) to be set 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 
Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Read completed with error (sct=0, sc=8) 00:06:53.078 Write completed with error (sct=0, sc=8) 00:06:53.078 [2024-12-10 03:53:52.199722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d1000d060 is same with the state(6) to be set 00:06:53.078 Initializing NVMe Controllers 00:06:53.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:53.078 Controller IO queue size 128, less than required. 00:06:53.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:53.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:53.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:53.078 Initialization complete. Launching workers. 
00:06:53.078 ======================================================== 00:06:53.078 Latency(us) 00:06:53.078 Device Information : IOPS MiB/s Average min max 00:06:53.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.74 0.08 911492.93 324.92 1006379.54 00:06:53.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.17 0.09 942490.14 363.00 2001673.09 00:06:53.078 ======================================================== 00:06:53.078 Total : 338.91 0.17 927606.02 324.92 2001673.09 00:06:53.078 00:06:53.078 [2024-12-10 03:53:52.200320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca79b0 (9): Bad file descriptor 00:06:53.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:53.078 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.078 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:53.078 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4091379 00:06:53.078 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4091379 00:06:53.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4091379) - No such process 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4091379 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4091379 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4091379 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 03:53:52 
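[editor's note] The storm of completions above is the expected outcome, not a failure of the harness: nvmf_delete_subsystem tears down the subsystem's queue pairs mid-run, so outstanding commands complete with sct=0/sc=8 (Generic Command Status 08h, Command Aborted due to SQ Deletion, in the NVMe base spec) and further submissions fail with -6, which is most likely -ENXIO from the now-disabled qpair. A small decode helper, illustrative only:

    # illustrative decode of the (sct, sc) pairs printed above
    decode_cpl() {
        case "$1/$2" in
            0/0) echo "generic: successful completion" ;;
            0/8) echo "generic: command aborted due to SQ deletion" ;;  # what this test expects
            *)   echo "sct=$1 sc=$2: see the NVMe base spec status code tables" ;;
        esac
    }
    decode_cpl 0 8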
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 [2024-12-10 03:53:52.727768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4091935 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:53.644 03:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.644 [2024-12-10 03:53:52.816315] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
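[editor's note] The loop that follows polls the perf process: kill -0 delivers no signal, it only tests whether the pid still exists, and the (( delay++ > 20 )) guard bounds the wait at roughly ten seconds of 0.5 s naps. Paraphrasing delete_subsystem.sh lines 56-60 as traced here:

    # shape of the wait loop traced below, paraphrased from the script trace
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # give up after ~10 s
        sleep 0.5
    done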
00:06:54.209 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.210 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:54.210 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.774 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.774 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:54.774 03:53:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.034 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.034 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:55.034 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.600 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.600 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:55.600 03:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.164 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.164 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:56.164 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.728 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.728 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:56.728 03:53:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.728 Initializing NVMe Controllers 00:06:56.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.728 Controller IO queue size 128, less than required. 00:06:56.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:56.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:56.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:56.728 Initialization complete. Launching workers. 
00:06:56.728 ======================================================== 00:06:56.728 Latency(us) 00:06:56.728 Device Information : IOPS MiB/s Average min max 00:06:56.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002437.87 1000121.29 1007851.88 00:06:56.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004171.06 1000238.30 1041423.19 00:06:56.728 ======================================================== 00:06:56.728 Total : 256.00 0.12 1003304.46 1000121.29 1041423.19 00:06:56.728 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4091935 00:06:57.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4091935) - No such process 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4091935 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.294 rmmod nvme_tcp 00:06:57.294 rmmod nvme_fabrics 00:06:57.294 rmmod nvme_keyring 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.294 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4091259 ']' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4091259 ']' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
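[editor's note] nvmftestfini, traced from here through the next block, unwinds the whole fixture: unload the host-side modules, kill the target app (killprocess checks the process name before signalling), strip the tagged firewall rules, and remove the namespace. Condensed, with the namespace deletion being an assumption about what _remove_spdk_ns does:

    # condensed teardown as traced here and below (names from this log)
    modprobe -v -r nvme-tcp          # also drops now-unused nvme_fabrics/nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only our tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns, roughly
    ip -4 addr flush cvl_0_1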
reactor_0 = sudo ']' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091259' 00:06:57.295 killing process with pid 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4091259 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.295 03:53:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:59.830 00:06:59.830 real 0m16.138s 00:06:59.830 user 0m29.103s 00:06:59.830 sys 0m5.443s 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.830 ************************************ 00:06:59.830 END TEST nvmf_delete_subsystem 00:06:59.830 ************************************ 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.830 ************************************ 00:06:59.830 START TEST nvmf_host_management 00:06:59.830 ************************************ 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.830 * Looking for test storage... 
00:06:59.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
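[editor's note] The lcov probe above is the harness's generic dotted-version comparison: split both version strings on '.', '-' and ':', then compare numerically field by field until one side wins, treating missing fields as zero. A self-contained sketch of the idiom:

    # minimal sketch of the cmp_versions idiom traced above
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: use the pre-2.0 option spelling"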
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.830 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:59.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.831 03:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
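[editor's note] The "[: : integer expression expected" complaint just above is bash evaluating '[' '' -eq 1 ']': an unset variable reached a numeric test inside build_nvmf_app_args. The variable name below is a placeholder, not the harness's actual one; the usual guard is a default expansion:

    # hypothetical repro and fix for the empty-numeric-test error logged above
    flag=""                        # stands in for whichever variable was empty here
    [ "$flag" -eq 1 ]              # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] || echo "feature off"   # defaulting to 0 avoids the error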
-ga e810 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
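[editor's note] The arrays being filled above classify the machine's NICs by PCI vendor:device pair so the tcp/phy run can prefer E810 parts. The real harness reads a prebuilt pci_bus_cache; a freestanding equivalent that scans sysfs directly, with device-name comments as my own annotation:

    # freestanding sketch of the NIC classification traced here
    intel=0x8086
    declare -a e810 x722
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        case "$vendor:$device" in
            $intel:0x1592|$intel:0x159b) e810+=("${pci##*/}") ;;   # E810-C / E810-XXV
            $intel:0x37d2)               x722+=("${pci##*/}") ;;
        esac
    done
    echo "e810: ${e810[*]:-none}"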
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.401 Found net devices under 0000:af:00.0: cvl_0_0 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.401 03:54:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:06.401 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.401 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:07:06.402 00:07:06.402 --- 10.0.0.2 ping statistics --- 00:07:06.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.402 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:07:06.402 00:07:06.402 --- 10.0.0.1 ping statistics --- 00:07:06.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.402 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4096123 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4096123 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:06.402 03:54:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4096123 ']' 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.402 03:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 [2024-12-10 03:54:05.020458] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:07:06.402 [2024-12-10 03:54:05.020507] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.402 [2024-12-10 03:54:05.100172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.402 [2024-12-10 03:54:05.141898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.402 [2024-12-10 03:54:05.141938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.402 [2024-12-10 03:54:05.141946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.402 [2024-12-10 03:54:05.141952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.402 [2024-12-10 03:54:05.141957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
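The trace above is nvmftestinit bringing up the physical TCP test bed: the two E810 ports are matched by PCI vendor/device ID, one is moved into a private network namespace to act as the target side, the firewall is opened for the NVMe/TCP port, connectivity is ping-checked in both directions, and nvmf_tgt is started inside that namespace. A condensed sketch of that sequence follows; the interface names, addresses, and core mask are taken from this log, the shorthand variables (TARGET_IF, INITIATOR_IF, NS) are this sketch's own, and the real logic lives in test/nvmf/common.sh.

  # Condensed sketch of the bring-up traced above; not a drop-in
  # replacement for test/nvmf/common.sh.
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP listener port; tagging the rule with an SPDK_NVMF
  # comment is what lets the teardown strip it via
  # iptables-save | grep -v SPDK_NVMF | iptables-restore (see the end of
  # this test). The comment text is abbreviated here.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment SPDK_NVMF

  ping -c 1 10.0.0.2                            # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator

  # Start the target inside the namespace on cores 1-4 (-m 0x1E), then
  # wait for /var/tmp/spdk.sock as waitforlisten does above.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!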
00:07:06.402 [2024-12-10 03:54:05.143517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.402 [2024-12-10 03:54:05.143622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.402 [2024-12-10 03:54:05.143732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.402 [2024-12-10 03:54:05.143734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 [2024-12-10 03:54:05.281611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 Malloc0 00:07:06.402 [2024-12-10 03:54:05.354309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=4096376 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4096376 /var/tmp/bdevperf.sock 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4096376 ']' 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:06.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.402 { 00:07:06.402 "params": { 00:07:06.402 "name": "Nvme$subsystem", 00:07:06.402 "trtype": "$TEST_TRANSPORT", 00:07:06.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.402 "adrfam": "ipv4", 00:07:06.402 "trsvcid": "$NVMF_PORT", 00:07:06.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.402 "hdgst": ${hdgst:-false}, 00:07:06.402 "ddgst": ${ddgst:-false} 00:07:06.402 }, 00:07:06.402 "method": "bdev_nvme_attach_controller" 00:07:06.402 } 00:07:06.402 EOF 00:07:06.402 )") 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:06.402 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.402 "params": { 00:07:06.402 "name": "Nvme0", 00:07:06.402 "trtype": "tcp", 00:07:06.402 "traddr": "10.0.0.2", 00:07:06.402 "adrfam": "ipv4", 00:07:06.402 "trsvcid": "4420", 00:07:06.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:06.402 "hdgst": false, 00:07:06.403 "ddgst": false 00:07:06.403 }, 00:07:06.403 "method": "bdev_nvme_attach_controller" 00:07:06.403 }' 00:07:06.403 [2024-12-10 03:54:05.450964] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
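The JSON printed above is what gen_nvmf_target_json emits: one bdev_nvme_attach_controller entry per subsystem, with the transport, target IP, and NQNs already substituted into the heredoc template. Note that bdevperf never sees a config file on disk; the --json /dev/fd/63 in the trace is simply the expansion of a bash process substitution. A minimal sketch of the launch idiom, with paths shortened:

  # bdevperf reads its bdev config from an anonymous fd; -q/-o/-w/-t
  # match the run traced above (queue depth 64, 64 KiB verify I/O, 10 s).
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!

  # The generated config boils down to one attach call per subsystem:
  #   { "method": "bdev_nvme_attach_controller",
  #     "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
  #                 "adrfam": "ipv4", "trsvcid": "4420",
  #                 "subnqn": "nqn.2016-06.io.spdk:cnode0",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host0" } }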
00:07:06.403 [2024-12-10 03:54:05.451007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096376 ] 00:07:06.403 [2024-12-10 03:54:05.524633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.403 [2024-12-10 03:54:05.564103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.678 Running I/O for 10 seconds... 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:06.678 03:54:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:06.962 
03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.962 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.962 [2024-12-10 03:54:06.173089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f4d0 is same with the state(6) to be set 00:07:06.962 [2024-12-10 03:54:06.173282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:06.962 [2024-12-10 03:54:06.173383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:06.962 [2024-12-10 03:54:06.173534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.962 [2024-12-10 03:54:06.173579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.962 [2024-12-10 03:54:06.173587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:06.963 [2024-12-10 03:54:06.173683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 
[2024-12-10 03:54:06.173832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 
03:54:06.173984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.173999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 
03:54:06.174133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.963 [2024-12-10 03:54:06.174177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.963 [2024-12-10 03:54:06.174185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.174270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.964 [2024-12-10 03:54:06.174276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:06.964 [2024-12-10 03:54:06.175233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:06.964 task offset: 102784 on job bdev=Nvme0n1 fails 00:07:06.964 
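The long dump above is the expected fallout of host_management.sh@84: once the waitforio gate had seen at least 100 completed reads (78 on the first poll, 707 on the second), the test removed host0 from cnode0's allow list, the target dropped the queue pair, and every in-flight command completed as ABORTED - SQ DELETION, which is why the job report ends with "task offset: 102784 on job bdev=Nvme0n1 fails". A sketch of that polling gate, modeled on the @52-@62 trace above and simplified (rpc_cmd is the suite's RPC wrapper seen throughout this log):

  # Poll bdevperf's iostat over its RPC socket until I/O is demonstrably
  # flowing; only then is it safe to revoke host access and expect aborts.
  waitforio() {
      local rpc_sock=$1 bdev=$2 i count ret=1
      for ((i = 10; i != 0; i--)); do
          count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                  jq -r '.bdevs[0].num_read_ops')
          if [ "$count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

  waitforio /var/tmp/bdevperf.sock Nvme0n1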
00:07:06.964 Latency(us)
00:07:06.964 [2024-12-10T02:54:06.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:06.964 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:06.964 Job: Nvme0n1 ended in about 0.40 seconds with error
00:07:06.964 Verification LBA range: start 0x0 length 0x400
00:07:06.964 Nvme0n1 : 0.40 1911.26 119.45 159.27 0.00 30087.97 1552.58 26963.38
00:07:06.964 [2024-12-10T02:54:06.250Z] ===================================================================================================================
00:07:06.964 [2024-12-10T02:54:06.250Z] Total : 1911.26 119.45 159.27 0.00 30087.97 1552.58 26963.38
00:07:06.964 [2024-12-10 03:54:06.177591] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.964 [2024-12-10 03:54:06.177611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15307e0 (9): Bad file descriptor 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.964 03:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:07.226 [2024-12-10 03:54:06.279279] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
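The failed run above is deliberate and the recovery is the actual assertion of this test: @84 revoked host0's access to cnode0 (aborting bdevperf's queued I/O, hence the 159.27 Fail/s in the table), @85 re-added it, and the initiator's automatic reconnect then logs "Resetting controller successful". Reduced to the two RPCs, with the NQNs from the config above and rpc_cmd as seen in this trace:

  # Revoke, then restore, host access; the initiator is expected to abort
  # in-flight I/O on removal and reset the controller cleanly after re-add.
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc_cmd nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1   # host_management.sh@87: give the reconnect time to complete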
00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4096376 00:07:08.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4096376) - No such process 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:08.162 { 00:07:08.162 "params": { 00:07:08.162 "name": "Nvme$subsystem", 00:07:08.162 "trtype": "$TEST_TRANSPORT", 00:07:08.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.162 "adrfam": "ipv4", 00:07:08.162 "trsvcid": "$NVMF_PORT", 00:07:08.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.162 "hdgst": ${hdgst:-false}, 00:07:08.162 "ddgst": ${ddgst:-false} 00:07:08.162 }, 00:07:08.162 "method": "bdev_nvme_attach_controller" 00:07:08.162 } 00:07:08.162 EOF 00:07:08.162 )") 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:08.162 03:54:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:08.162 "params": { 00:07:08.162 "name": "Nvme0", 00:07:08.162 "trtype": "tcp", 00:07:08.162 "traddr": "10.0.0.2", 00:07:08.162 "adrfam": "ipv4", 00:07:08.162 "trsvcid": "4420", 00:07:08.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.162 "hdgst": false, 00:07:08.162 "ddgst": false 00:07:08.162 }, 00:07:08.162 "method": "bdev_nvme_attach_controller" 00:07:08.162 }' 00:07:08.162 [2024-12-10 03:54:07.246882] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:07:08.162 [2024-12-10 03:54:07.246933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096911 ] 00:07:08.163 [2024-12-10 03:54:07.322465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.163 [2024-12-10 03:54:07.360109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.421 Running I/O for 1 seconds... 
00:07:09.794 2011.00 IOPS, 125.69 MiB/s
00:07:09.794 Latency(us)
00:07:09.794 [2024-12-10T02:54:09.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:09.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:09.795 Verification LBA range: start 0x0 length 0x400
00:07:09.795 Nvme0n1 : 1.05 1969.05 123.07 0.00 0.00 30681.47 3401.63 45188.63
00:07:09.795 [2024-12-10T02:54:09.081Z] ===================================================================================================================
00:07:09.795 [2024-12-10T02:54:09.081Z] Total : 1969.05 123.07 0.00 0.00 30681.47 3401.63 45188.63
00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.795 rmmod nvme_tcp 00:07:09.795 rmmod nvme_fabrics 00:07:09.795 rmmod nvme_keyring 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4096123 ']' 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4096123 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4096123 ']' 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4096123 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.795 03:54:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4096123 00:07:09.795 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:09.795 03:54:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:09.795 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4096123' 00:07:09.795 killing process with pid 4096123 00:07:09.795 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4096123 00:07:09.795 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4096123 00:07:10.053 [2024-12-10 03:54:09.180284] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.054 03:54:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:12.590 00:07:12.590 real 0m12.590s 00:07:12.590 user 0m20.272s 00:07:12.590 sys 0m5.629s 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.590 ************************************ 00:07:12.590 END TEST nvmf_host_management 00:07:12.590 ************************************ 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.590 ************************************ 00:07:12.590 START TEST nvmf_lvol 00:07:12.590 ************************************ 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.590 * Looking for test storage... 00:07:12.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.590 --rc genhtml_branch_coverage=1 00:07:12.590 --rc genhtml_function_coverage=1 00:07:12.590 --rc genhtml_legend=1 00:07:12.590 --rc geninfo_all_blocks=1 00:07:12.590 --rc geninfo_unexecuted_blocks=1 00:07:12.590 00:07:12.590 ' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.590 --rc genhtml_branch_coverage=1 00:07:12.590 --rc genhtml_function_coverage=1 00:07:12.590 --rc genhtml_legend=1 00:07:12.590 --rc geninfo_all_blocks=1 00:07:12.590 --rc geninfo_unexecuted_blocks=1 00:07:12.590 00:07:12.590 ' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.590 --rc genhtml_branch_coverage=1 00:07:12.590 --rc genhtml_function_coverage=1 00:07:12.590 --rc genhtml_legend=1 00:07:12.590 --rc geninfo_all_blocks=1 00:07:12.590 --rc geninfo_unexecuted_blocks=1 00:07:12.590 00:07:12.590 ' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.590 --rc genhtml_branch_coverage=1 00:07:12.590 --rc genhtml_function_coverage=1 00:07:12.590 --rc genhtml_legend=1 00:07:12.590 --rc geninfo_all_blocks=1 00:07:12.590 --rc geninfo_unexecuted_blocks=1 00:07:12.590 00:07:12.590 ' 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
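The lt 1.15 2 call traced above is how the harness decides whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them field by field, treating missing fields as 0. A self-contained sketch of that comparison, reconstructed from the trace and assuming purely numeric fields:

cmp_lt() {
  # return 0 (true) iff version $1 sorts strictly before version $2
  local -a v1 v2
  local i max
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not less-than
}
cmp_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the branch taken above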
00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.590 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.591 03:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:19.160 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:19.160 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.160 03:54:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.160 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:19.160 Found net devices under 0000:af:00.0: cvl_0_0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:19.161 Found net devices under 0000:af:00.1: cvl_0_1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:07:19.161 00:07:19.161 --- 10.0.0.2 ping statistics --- 00:07:19.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.161 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:19.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:19.161 00:07:19.161 --- 10.0.0.1 ping statistics --- 00:07:19.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.161 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4100753 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4100753 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4100753 ']' 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.161 [2024-12-10 03:54:17.549860] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
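The device scan that produced the two 'Found net devices under 0000:af:00.x: cvl_0_x' lines above walks the PCI bus for the whitelisted NIC IDs (the E810 entries 0x1592/0x159b matched here; the x722 and Mellanox arrays are populated but unused) and maps each hit to its kernel interface through /sys/bus/pci/devices/$pci/net/. A rough sysfs-only equivalent of that walk; the harness's pci_bus_cache is internal to nvmf/common.sh, so plain sysfs reads stand in for it here:

for pci in /sys/bus/pci/devices/*; do
  [ "$(cat "$pci/vendor" 2>/dev/null)" = 0x8086 ] || continue
  case "$(cat "$pci/device")" in
    0x1592|0x159b) ;;           # the e810 IDs registered above
    *) continue ;;
  esac
  for net in "$pci"/net/*; do   # same glob the trace shows at common.sh@411
    [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done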
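nvmf_tcp_init then puts target and initiator on opposite sides of a network namespace so the TCP transport gets a real two-endpoint path on one machine: cvl_0_0 moves into cvl_0_0_ns_spdk as the target port, cvl_0_1 stays in the root namespace as the initiator, both get a 10.0.0.0/24 address, port 4420 is opened, and the two pings (0.489 ms and 0.222 ms above) prove the path before the target starts. The sequence, condensed from the trace with the nvmf_tgt path shortened; run as root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7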
00:07:19.161 [2024-12-10 03:54:17.549907] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.161 [2024-12-10 03:54:17.629260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.161 [2024-12-10 03:54:17.670006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.161 [2024-12-10 03:54:17.670042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.161 [2024-12-10 03:54:17.670049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.161 [2024-12-10 03:54:17.670055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.161 [2024-12-10 03:54:17.670059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.161 [2024-12-10 03:54:17.671259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.161 [2024-12-10 03:54:17.671390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.161 [2024-12-10 03:54:17.671392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.161 03:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:19.161 [2024-12-10 03:54:17.973120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.161 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:19.161 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:19.161 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:19.161 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:19.161 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:19.419 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:19.677 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=12f53e9f-cc75-4771-9521-1b4026406e70 00:07:19.677 03:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12f53e9f-cc75-4771-9521-1b4026406e70 lvol 20 00:07:19.935 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=38072db0-d930-4b2a-ace8-6520e250c8d0 00:07:19.935 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:20.192 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38072db0-d930-4b2a-ace8-6520e250c8d0 00:07:20.192 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:20.450 [2024-12-10 03:54:19.600331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.450 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.708 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4101205 00:07:20.708 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:20.708 03:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:21.641 03:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 38072db0-d930-4b2a-ace8-6520e250c8d0 MY_SNAPSHOT 00:07:21.899 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3c654174-b98b-4f35-9938-0b6ba0ea7903 00:07:21.899 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 38072db0-d930-4b2a-ace8-6520e250c8d0 30 00:07:22.157 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3c654174-b98b-4f35-9938-0b6ba0ea7903 MY_CLONE 00:07:22.414 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e05e8e2b-3e10-4220-9de7-dfb2cd81d769 00:07:22.414 03:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e05e8e2b-3e10-4220-9de7-dfb2cd81d769 00:07:22.980 03:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4101205 00:07:31.092 Initializing NVMe Controllers 00:07:31.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:31.092 Controller IO queue size 128, less than required. 00:07:31.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
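The nvmf_lvol body traced above runs the whole logical-volume lifecycle over the RPC socket while spdk_nvme_perf (pid 4101205) writes randomly to the exported namespace: two 64 MiB malloc bdevs striped into raid0, an lvstore plus a 20 MiB lvol on top, the lvol exported as cnode0, then snapshot, resize to 30 MiB, clone, and inflate under load. A condensed sketch using the same rpc.py verbs; each create call prints the new name or UUID, captured into shell variables here, and the trailing deletes mirror the teardown traced after the perf results below:

rpc=./scripts/rpc.py                                  # relative to an SPDK checkout
$rpc nvmf_create_transport -t tcp -o -u 8192
m0=$($rpc bdev_malloc_create 64 512)                  # 64 MiB, 512 B blocks
m1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the live lvol
$rpc bdev_lvol_resize "$lvol" 30                      # grow it past the snapshot size
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # allocate the clone fully
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 # teardown, as traced below
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"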
00:07:31.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:31.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:31.092 Initialization complete. Launching workers. 00:07:31.092 ======================================================== 00:07:31.092 Latency(us) 00:07:31.092 Device Information : IOPS MiB/s Average min max 00:07:31.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12113.50 47.32 10570.18 1511.93 60921.35 00:07:31.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11986.00 46.82 10679.29 3535.67 58648.36 00:07:31.092 ======================================================== 00:07:31.092 Total : 24099.50 94.14 10624.45 1511.93 60921.35 00:07:31.092 00:07:31.092 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.350 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38072db0-d930-4b2a-ace8-6520e250c8d0 00:07:31.608 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12f53e9f-cc75-4771-9521-1b4026406e70 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.867 rmmod nvme_tcp 00:07:31.867 rmmod nvme_fabrics 00:07:31.867 rmmod nvme_keyring 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4100753 ']' 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4100753 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4100753 ']' 00:07:31.867 03:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4100753 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4100753 00:07:31.867 03:54:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4100753' 00:07:31.867 killing process with pid 4100753 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4100753 00:07:31.867 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4100753 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.126 03:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.662 00:07:34.662 real 0m21.970s 00:07:34.662 user 1m3.239s 00:07:34.662 sys 0m7.673s 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.662 ************************************ 00:07:34.662 END TEST nvmf_lvol 00:07:34.662 ************************************ 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.662 ************************************ 00:07:34.662 START TEST nvmf_lvs_grow 00:07:34.662 ************************************ 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:34.662 * Looking for test storage... 
00:07:34.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.662 --rc genhtml_branch_coverage=1 00:07:34.662 --rc genhtml_function_coverage=1 00:07:34.662 --rc genhtml_legend=1 00:07:34.662 --rc geninfo_all_blocks=1 00:07:34.662 --rc geninfo_unexecuted_blocks=1 00:07:34.662 00:07:34.662 ' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.662 --rc genhtml_branch_coverage=1 00:07:34.662 --rc genhtml_function_coverage=1 00:07:34.662 --rc genhtml_legend=1 00:07:34.662 --rc geninfo_all_blocks=1 00:07:34.662 --rc geninfo_unexecuted_blocks=1 00:07:34.662 00:07:34.662 ' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.662 --rc genhtml_branch_coverage=1 00:07:34.662 --rc genhtml_function_coverage=1 00:07:34.662 --rc genhtml_legend=1 00:07:34.662 --rc geninfo_all_blocks=1 00:07:34.662 --rc geninfo_unexecuted_blocks=1 00:07:34.662 00:07:34.662 ' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.662 --rc genhtml_branch_coverage=1 00:07:34.662 --rc genhtml_function_coverage=1 00:07:34.662 --rc genhtml_legend=1 00:07:34.662 --rc geninfo_all_blocks=1 00:07:34.662 --rc geninfo_unexecuted_blocks=1 00:07:34.662 00:07:34.662 ' 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.662 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:34.663 03:54:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.663 03:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.234 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.234 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.234 03:54:39 
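The "Found 0000:af:00.0 (0x8086 - 0x159b)" lines above come from matching each candidate PCI function against the Intel and Mellanox device-ID tables built just before them; 0x8086/0x159b is an Intel E810 port, bound here to the ice driver. A rough standalone equivalent of that probe, using only sysfs (the PCI address is taken from this log):

    pci=0000:af:00.0
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # 0x8086 (Intel)
    device=$(cat /sys/bus/pci/devices/$pci/device)    # 0x159b (E810)
    driver=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")  # ice, when bound
    echo "Found $pci ($vendor - $device), driver $driver"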
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.234 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.234 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
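Each matched port is then resolved to its kernel netdev through the device's net/ subdirectory, which is how the trace arrives at cvl_0_0 and cvl_0_1 before splitting them into target and initiator roles just below. A sketch of the same lookup (interface names are specific to this host):

    net_devs=()
    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${path##*/}"
            net_devs+=("${path##*/}")
        done
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run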
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.234 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:07:41.235 00:07:41.235 --- 10.0.0.2 ping statistics --- 00:07:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.235 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:07:41.235 00:07:41.235 --- 10.0.0.1 ping statistics --- 00:07:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.235 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4106550 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4106550 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4106550 ']' 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 [2024-12-10 03:54:39.632367] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
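nvmf_tcp_init, traced above, is what makes a single-host "phy" run behave like two machines: one E810 port is moved into a private network namespace for the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1), and both directions are ping-verified before the test proceeds. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1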
00:07:41.235 [2024-12-10 03:54:39.632418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.235 [2024-12-10 03:54:39.713747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.235 [2024-12-10 03:54:39.754091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.235 [2024-12-10 03:54:39.754125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.235 [2024-12-10 03:54:39.754131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.235 [2024-12-10 03:54:39.754137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.235 [2024-12-10 03:54:39.754144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.235 [2024-12-10 03:54:39.754670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.235 03:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:41.235 [2024-12-10 03:54:40.062765] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.235 ************************************ 00:07:41.235 START TEST lvs_grow_clean 00:07:41.235 ************************************ 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:41.235 03:54:40 
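nvmfappstart, whose output appears above, wraps the target launch: nvmf_tgt is started inside the namespace, waitforlisten polls the /var/tmp/spdk.sock RPC socket until the app answers, and only then is the TCP transport created. Reduced to the commands the trace shows (paths shortened; run from the SPDK tree):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                    # 4106550 in this run
    # waitforlisten blocks here until the RPC socket accepts connections
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192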
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:41.235 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:41.494 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 lvol 150 00:07:41.752 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b1832e5c-e53b-46d8-bf01-2c2b0e157a58 00:07:41.752 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.752 03:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:42.009 [2024-12-10 03:54:41.103686] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:42.009 [2024-12-10 03:54:41.103736] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:42.009 true 00:07:42.009 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:42.009 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:42.009 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:42.009 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.267 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1832e5c-e53b-46d8-bf01-2c2b0e157a58 00:07:42.525 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:42.525 [2024-12-10 03:54:41.805814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.783 03:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4106966 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4106966 /var/tmp/bdevperf.sock 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4106966 ']' 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:42.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.783 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 [2024-12-10 03:54:42.052433] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
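The lvs_grow body traced across the last two blocks boils down to: back an AIO bdev with a 200 MiB file, build a 4 MiB-cluster lvstore on it, carve out a 150 MiB lvol, pre-truncate the file to 400 MiB and rescan (capacity becomes available but the lvstore still reports 49 data clusters, as checked above), then export the lvol over NVMe/TCP. A condensed sketch, with rpc.py standing for the full scripts/rpc.py path and $lvs/$lvol standing in for the UUIDs printed in the log:

    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 6836aed0-... here
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)         # b1832e5c-... here
    truncate -s 400M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420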
00:07:42.783 [2024-12-10 03:54:42.052478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106966 ] 00:07:43.041 [2024-12-10 03:54:42.123964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.041 [2024-12-10 03:54:42.162788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.041 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.041 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:43.041 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:43.607 Nvme0n1 00:07:43.607 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:43.607 [ 00:07:43.607 { 00:07:43.607 "name": "Nvme0n1", 00:07:43.607 "aliases": [ 00:07:43.607 "b1832e5c-e53b-46d8-bf01-2c2b0e157a58" 00:07:43.607 ], 00:07:43.607 "product_name": "NVMe disk", 00:07:43.607 "block_size": 4096, 00:07:43.607 "num_blocks": 38912, 00:07:43.607 "uuid": "b1832e5c-e53b-46d8-bf01-2c2b0e157a58", 00:07:43.607 "numa_id": 1, 00:07:43.607 "assigned_rate_limits": { 00:07:43.607 "rw_ios_per_sec": 0, 00:07:43.607 "rw_mbytes_per_sec": 0, 00:07:43.607 "r_mbytes_per_sec": 0, 00:07:43.607 "w_mbytes_per_sec": 0 00:07:43.607 }, 00:07:43.607 "claimed": false, 00:07:43.607 "zoned": false, 00:07:43.607 "supported_io_types": { 00:07:43.607 "read": true, 00:07:43.607 "write": true, 00:07:43.607 "unmap": true, 00:07:43.607 "flush": true, 00:07:43.607 "reset": true, 00:07:43.607 "nvme_admin": true, 00:07:43.607 "nvme_io": true, 00:07:43.607 "nvme_io_md": false, 00:07:43.607 "write_zeroes": true, 00:07:43.607 "zcopy": false, 00:07:43.607 "get_zone_info": false, 00:07:43.607 "zone_management": false, 00:07:43.607 "zone_append": false, 00:07:43.607 "compare": true, 00:07:43.607 "compare_and_write": true, 00:07:43.607 "abort": true, 00:07:43.607 "seek_hole": false, 00:07:43.607 "seek_data": false, 00:07:43.607 "copy": true, 00:07:43.607 "nvme_iov_md": false 00:07:43.607 }, 00:07:43.607 "memory_domains": [ 00:07:43.607 { 00:07:43.607 "dma_device_id": "system", 00:07:43.607 "dma_device_type": 1 00:07:43.607 } 00:07:43.607 ], 00:07:43.607 "driver_specific": { 00:07:43.607 "nvme": [ 00:07:43.607 { 00:07:43.607 "trid": { 00:07:43.607 "trtype": "TCP", 00:07:43.607 "adrfam": "IPv4", 00:07:43.607 "traddr": "10.0.0.2", 00:07:43.607 "trsvcid": "4420", 00:07:43.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:43.607 }, 00:07:43.607 "ctrlr_data": { 00:07:43.607 "cntlid": 1, 00:07:43.607 "vendor_id": "0x8086", 00:07:43.607 "model_number": "SPDK bdev Controller", 00:07:43.607 "serial_number": "SPDK0", 00:07:43.607 "firmware_revision": "25.01", 00:07:43.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:43.607 "oacs": { 00:07:43.607 "security": 0, 00:07:43.607 "format": 0, 00:07:43.607 "firmware": 0, 00:07:43.607 "ns_manage": 0 00:07:43.607 }, 00:07:43.607 "multi_ctrlr": true, 00:07:43.607 
"ana_reporting": false 00:07:43.607 }, 00:07:43.607 "vs": { 00:07:43.607 "nvme_version": "1.3" 00:07:43.607 }, 00:07:43.607 "ns_data": { 00:07:43.607 "id": 1, 00:07:43.607 "can_share": true 00:07:43.607 } 00:07:43.607 } 00:07:43.607 ], 00:07:43.607 "mp_policy": "active_passive" 00:07:43.607 } 00:07:43.607 } 00:07:43.607 ] 00:07:43.607 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4107192 00:07:43.607 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:43.607 03:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:43.607 Running I/O for 10 seconds... 00:07:44.981 Latency(us) 00:07:44.981 [2024-12-10T02:54:44.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.981 Nvme0n1 : 1.00 23363.00 91.26 0.00 0.00 0.00 0.00 0.00 00:07:44.981 [2024-12-10T02:54:44.267Z] =================================================================================================================== 00:07:44.981 [2024-12-10T02:54:44.267Z] Total : 23363.00 91.26 0.00 0.00 0.00 0.00 0.00 00:07:44.981 00:07:45.546 03:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.804 Nvme0n1 : 2.00 23612.00 92.23 0.00 0.00 0.00 0.00 0.00 00:07:45.804 [2024-12-10T02:54:45.090Z] =================================================================================================================== 00:07:45.804 [2024-12-10T02:54:45.090Z] Total : 23612.00 92.23 0.00 0.00 0.00 0.00 0.00 00:07:45.804 00:07:45.804 true 00:07:45.804 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:45.804 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:46.061 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:46.062 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:46.062 03:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4107192 00:07:46.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.627 Nvme0n1 : 3.00 23680.67 92.50 0.00 0.00 0.00 0.00 0.00 00:07:46.627 [2024-12-10T02:54:45.913Z] =================================================================================================================== 00:07:46.627 [2024-12-10T02:54:45.913Z] Total : 23680.67 92.50 0.00 0.00 0.00 0.00 0.00 00:07:46.627 00:07:48.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.001 Nvme0n1 : 4.00 23734.00 92.71 0.00 0.00 0.00 0.00 0.00 00:07:48.001 [2024-12-10T02:54:47.287Z] 
=================================================================================================================== 00:07:48.001 [2024-12-10T02:54:47.287Z] Total : 23734.00 92.71 0.00 0.00 0.00 0.00 0.00 00:07:48.001 00:07:48.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.935 Nvme0n1 : 5.00 23793.00 92.94 0.00 0.00 0.00 0.00 0.00 00:07:48.935 [2024-12-10T02:54:48.221Z] =================================================================================================================== 00:07:48.935 [2024-12-10T02:54:48.221Z] Total : 23793.00 92.94 0.00 0.00 0.00 0.00 0.00 00:07:48.935 00:07:49.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.868 Nvme0n1 : 6.00 23840.17 93.13 0.00 0.00 0.00 0.00 0.00 00:07:49.868 [2024-12-10T02:54:49.155Z] =================================================================================================================== 00:07:49.869 [2024-12-10T02:54:49.155Z] Total : 23840.17 93.13 0.00 0.00 0.00 0.00 0.00 00:07:49.869 00:07:50.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.802 Nvme0n1 : 7.00 23870.43 93.24 0.00 0.00 0.00 0.00 0.00 00:07:50.802 [2024-12-10T02:54:50.088Z] =================================================================================================================== 00:07:50.802 [2024-12-10T02:54:50.088Z] Total : 23870.43 93.24 0.00 0.00 0.00 0.00 0.00 00:07:50.802 00:07:51.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.735 Nvme0n1 : 8.00 23819.00 93.04 0.00 0.00 0.00 0.00 0.00 00:07:51.735 [2024-12-10T02:54:51.021Z] =================================================================================================================== 00:07:51.735 [2024-12-10T02:54:51.021Z] Total : 23819.00 93.04 0.00 0.00 0.00 0.00 0.00 00:07:51.735 00:07:52.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.670 Nvme0n1 : 9.00 23846.22 93.15 0.00 0.00 0.00 0.00 0.00 00:07:52.670 [2024-12-10T02:54:51.956Z] =================================================================================================================== 00:07:52.670 [2024-12-10T02:54:51.956Z] Total : 23846.22 93.15 0.00 0.00 0.00 0.00 0.00 00:07:52.670 00:07:54.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.044 Nvme0n1 : 10.00 23868.70 93.24 0.00 0.00 0.00 0.00 0.00 00:07:54.044 [2024-12-10T02:54:53.330Z] =================================================================================================================== 00:07:54.044 [2024-12-10T02:54:53.330Z] Total : 23868.70 93.24 0.00 0.00 0.00 0.00 0.00 00:07:54.044 00:07:54.044 00:07:54.044 Latency(us) 00:07:54.044 [2024-12-10T02:54:53.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.044 Nvme0n1 : 10.00 23874.29 93.26 0.00 0.00 5358.63 3105.16 12732.71 00:07:54.044 [2024-12-10T02:54:53.330Z] =================================================================================================================== 00:07:54.044 [2024-12-10T02:54:53.330Z] Total : 23874.29 93.26 0.00 0.00 5358.63 3105.16 12732.71 00:07:54.044 { 00:07:54.044 "results": [ 00:07:54.044 { 00:07:54.044 "job": "Nvme0n1", 00:07:54.044 "core_mask": "0x2", 00:07:54.044 "workload": "randwrite", 00:07:54.044 "status": "finished", 00:07:54.044 "queue_depth": 128, 00:07:54.044 "io_size": 4096, 00:07:54.044 
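The grow itself happens while bdevperf is mid-run: at the two-second mark the trace above shows bdev_lvol_grow_lvstore claiming the extra 200 MiB that the earlier truncate and rescan exposed, and the cluster count is re-checked with I/O still in flight. In outline (UUID taken from this log):

    rpc.py bdev_lvol_grow_lvstore -u 6836aed0-e59b-40c2-a650-22a2d24c15d4
    rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 \
        | jq -r '.[0].total_data_clusters'     # 99 now, up from 49 at 200 MiB

The closing summary is also self-consistent, which is a quick sanity check for any bdevperf run: 23874.29 IOPS x 4096 B is 93.26 MiB/s, matching the reported MiB/s column, and Little's law at queue depth 128 gives 128 / 23874.29 = 5.36 ms, in line with the 5358.63 us average latency.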
"runtime": 10.003022, 00:07:54.044 "iops": 23874.285191015275, 00:07:54.044 "mibps": 93.25892652740342, 00:07:54.044 "io_failed": 0, 00:07:54.044 "io_timeout": 0, 00:07:54.044 "avg_latency_us": 5358.632184187202, 00:07:54.044 "min_latency_us": 3105.158095238095, 00:07:54.044 "max_latency_us": 12732.708571428571 00:07:54.044 } 00:07:54.044 ], 00:07:54.044 "core_count": 1 00:07:54.044 } 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4106966 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4106966 ']' 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4106966 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4106966 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4106966' 00:07:54.044 killing process with pid 4106966 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4106966 00:07:54.044 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.044 00:07:54.044 Latency(us) 00:07:54.044 [2024-12-10T02:54:53.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.044 [2024-12-10T02:54:53.330Z] =================================================================================================================== 00:07:54.044 [2024-12-10T02:54:53.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.044 03:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4106966 00:07:54.044 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.044 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:54.302 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:54.302 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:54.560 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:54.560 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:54.560 03:54:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:54.818 [2024-12-10 03:54:53.888864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:54.818 03:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:55.076 request: 00:07:55.076 { 00:07:55.076 "uuid": "6836aed0-e59b-40c2-a650-22a2d24c15d4", 00:07:55.076 "method": "bdev_lvol_get_lvstores", 00:07:55.076 "req_id": 1 00:07:55.076 } 00:07:55.076 Got JSON-RPC error response 00:07:55.076 response: 00:07:55.076 { 00:07:55.076 "code": -19, 00:07:55.076 "message": "No such device" 00:07:55.076 } 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
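The block above is the hot-remove assertion: deleting the backing AIO bdev makes vbdev_lvol close the lvstore, so the same bdev_lvol_get_lvstores call must now fail, and the NOT helper (autotest_common's inverted-exit-status wrapper) turns the captured -19 "No such device" JSON-RPC error into a pass. Without the harness machinery, the check amounts to:

    lvs=6836aed0-e59b-40c2-a650-22a2d24c15d4   # lvstore UUID from this run
    rpc.py bdev_aio_delete aio_bdev
    if rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore still registered after base bdev removal" >&2
        exit 1
    fi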
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.076 aio_bdev 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b1832e5c-e53b-46d8-bf01-2c2b0e157a58 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b1832e5c-e53b-46d8-bf01-2c2b0e157a58 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.076 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:55.334 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b1832e5c-e53b-46d8-bf01-2c2b0e157a58 -t 2000 00:07:55.592 [ 00:07:55.592 { 00:07:55.592 "name": "b1832e5c-e53b-46d8-bf01-2c2b0e157a58", 00:07:55.592 "aliases": [ 00:07:55.592 "lvs/lvol" 00:07:55.592 ], 00:07:55.592 "product_name": "Logical Volume", 00:07:55.592 "block_size": 4096, 00:07:55.592 "num_blocks": 38912, 00:07:55.592 "uuid": "b1832e5c-e53b-46d8-bf01-2c2b0e157a58", 00:07:55.592 "assigned_rate_limits": { 00:07:55.592 "rw_ios_per_sec": 0, 00:07:55.592 "rw_mbytes_per_sec": 0, 00:07:55.592 "r_mbytes_per_sec": 0, 00:07:55.592 "w_mbytes_per_sec": 0 00:07:55.592 }, 00:07:55.592 "claimed": false, 00:07:55.592 "zoned": false, 00:07:55.592 "supported_io_types": { 00:07:55.592 "read": true, 00:07:55.592 "write": true, 00:07:55.592 "unmap": true, 00:07:55.592 "flush": false, 00:07:55.592 "reset": true, 00:07:55.592 "nvme_admin": false, 00:07:55.592 "nvme_io": false, 00:07:55.592 "nvme_io_md": false, 00:07:55.592 "write_zeroes": true, 00:07:55.592 "zcopy": false, 00:07:55.592 "get_zone_info": false, 00:07:55.592 "zone_management": false, 00:07:55.592 "zone_append": false, 00:07:55.592 "compare": false, 00:07:55.592 "compare_and_write": false, 00:07:55.592 "abort": false, 00:07:55.592 "seek_hole": true, 00:07:55.592 "seek_data": true, 00:07:55.592 "copy": false, 00:07:55.592 "nvme_iov_md": false 00:07:55.592 }, 00:07:55.592 "driver_specific": { 00:07:55.592 "lvol": { 00:07:55.592 "lvol_store_uuid": "6836aed0-e59b-40c2-a650-22a2d24c15d4", 00:07:55.592 "base_bdev": "aio_bdev", 00:07:55.592 "thin_provision": false, 00:07:55.592 "num_allocated_clusters": 38, 00:07:55.592 "snapshot": false, 00:07:55.592 "clone": false, 00:07:55.592 "esnap_clone": false 00:07:55.592 } 00:07:55.592 } 00:07:55.592 } 00:07:55.592 ] 00:07:55.592 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:55.592 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:55.592 
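Re-creating the AIO bdev over the same file triggers lvstore examine on load, which is why the lvol above reappears with its original UUID and "num_allocated_clusters": 38 intact. The free/total assertions that follow are plain cluster arithmetic: a 400 MiB file at 4 MiB clusters yields 99 data clusters once lvstore metadata is set aside, and 99 minus the 38 clusters backing the 150 MiB lvol leaves the expected 61 free. A sketch of the recovery step (waitforbdev is the harness's poll loop around bdev_get_bdevs):

    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b b1832e5c-e53b-46d8-bf01-2c2b0e157a58 -t 2000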
03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:55.592 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:55.592 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:55.592 03:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:55.850 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:55.850 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b1832e5c-e53b-46d8-bf01-2c2b0e157a58 00:07:56.108 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6836aed0-e59b-40c2-a650-22a2d24c15d4 00:07:56.366 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:56.366 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.366 00:07:56.366 real 0m15.494s 00:07:56.366 user 0m15.036s 00:07:56.366 sys 0m1.485s 00:07:56.366 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.366 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:56.366 ************************************ 00:07:56.366 END TEST lvs_grow_clean 00:07:56.366 ************************************ 00:07:56.624 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:56.624 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.625 ************************************ 00:07:56.625 START TEST lvs_grow_dirty 00:07:56.625 ************************************ 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.625 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.882 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:56.882 03:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:56.882 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:07:56.882 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:07:56.882 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:57.139 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:57.139 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:57.139 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c lvol 150 00:07:57.398 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:07:57.398 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.398 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:57.398 [2024-12-10 03:54:56.652048] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:57.398 [2024-12-10 03:54:56.652098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:57.398 true 00:07:57.398 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:57.398 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:07:57.656 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:57.656 03:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.914 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:07:58.172 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:58.172 [2024-12-10 03:54:57.394242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.172 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.429 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4109710 00:07:58.429 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.429 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:58.429 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4109710 /var/tmp/bdevperf.sock 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4109710 ']' 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.430 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:58.430 [2024-12-10 03:54:57.634017] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
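From here the dirty variant repeats the clean flow against a fresh lvstore (d6711d2a-..., lvol 7e7eafe6-...). The bdevperf pattern is the same both times: start it idle with -z on its own RPC socket, attach the exported namespace as a local bdev, then kick the workload from the driver script. Roughly, with paths shortened:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # -z: sit idle until told to run
    # waitforlisten on /var/tmp/bdevperf.sock, then:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests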
00:07:58.430 [2024-12-10 03:54:57.634063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109710 ] 00:07:58.430 [2024-12-10 03:54:57.707997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.687 [2024-12-10 03:54:57.746704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.687 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.687 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:58.687 03:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.945 Nvme0n1 00:07:58.945 03:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:59.203 [ 00:07:59.203 { 00:07:59.203 "name": "Nvme0n1", 00:07:59.203 "aliases": [ 00:07:59.203 "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2" 00:07:59.203 ], 00:07:59.203 "product_name": "NVMe disk", 00:07:59.203 "block_size": 4096, 00:07:59.203 "num_blocks": 38912, 00:07:59.203 "uuid": "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2", 00:07:59.203 "numa_id": 1, 00:07:59.203 "assigned_rate_limits": { 00:07:59.203 "rw_ios_per_sec": 0, 00:07:59.203 "rw_mbytes_per_sec": 0, 00:07:59.203 "r_mbytes_per_sec": 0, 00:07:59.203 "w_mbytes_per_sec": 0 00:07:59.203 }, 00:07:59.203 "claimed": false, 00:07:59.203 "zoned": false, 00:07:59.203 "supported_io_types": { 00:07:59.203 "read": true, 00:07:59.203 "write": true, 00:07:59.203 "unmap": true, 00:07:59.203 "flush": true, 00:07:59.203 "reset": true, 00:07:59.203 "nvme_admin": true, 00:07:59.203 "nvme_io": true, 00:07:59.203 "nvme_io_md": false, 00:07:59.203 "write_zeroes": true, 00:07:59.203 "zcopy": false, 00:07:59.203 "get_zone_info": false, 00:07:59.203 "zone_management": false, 00:07:59.203 "zone_append": false, 00:07:59.203 "compare": true, 00:07:59.203 "compare_and_write": true, 00:07:59.203 "abort": true, 00:07:59.203 "seek_hole": false, 00:07:59.203 "seek_data": false, 00:07:59.203 "copy": true, 00:07:59.203 "nvme_iov_md": false 00:07:59.203 }, 00:07:59.203 "memory_domains": [ 00:07:59.203 { 00:07:59.203 "dma_device_id": "system", 00:07:59.203 "dma_device_type": 1 00:07:59.203 } 00:07:59.203 ], 00:07:59.203 "driver_specific": { 00:07:59.203 "nvme": [ 00:07:59.203 { 00:07:59.203 "trid": { 00:07:59.203 "trtype": "TCP", 00:07:59.203 "adrfam": "IPv4", 00:07:59.203 "traddr": "10.0.0.2", 00:07:59.203 "trsvcid": "4420", 00:07:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:59.203 }, 00:07:59.203 "ctrlr_data": { 00:07:59.203 "cntlid": 1, 00:07:59.203 "vendor_id": "0x8086", 00:07:59.203 "model_number": "SPDK bdev Controller", 00:07:59.203 "serial_number": "SPDK0", 00:07:59.203 "firmware_revision": "25.01", 00:07:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.203 "oacs": { 00:07:59.203 "security": 0, 00:07:59.203 "format": 0, 00:07:59.203 "firmware": 0, 00:07:59.203 "ns_manage": 0 00:07:59.203 }, 00:07:59.203 "multi_ctrlr": true, 00:07:59.203 
"ana_reporting": false 00:07:59.203 }, 00:07:59.203 "vs": { 00:07:59.203 "nvme_version": "1.3" 00:07:59.203 }, 00:07:59.203 "ns_data": { 00:07:59.203 "id": 1, 00:07:59.203 "can_share": true 00:07:59.203 } 00:07:59.203 } 00:07:59.203 ], 00:07:59.203 "mp_policy": "active_passive" 00:07:59.203 } 00:07:59.203 } 00:07:59.203 ] 00:07:59.203 03:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4109726 00:07:59.203 03:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.203 03:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.461 Running I/O for 10 seconds... 00:08:00.395 Latency(us) 00:08:00.395 [2024-12-10T02:54:59.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.395 Nvme0n1 : 1.00 23563.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:00.395 [2024-12-10T02:54:59.681Z] =================================================================================================================== 00:08:00.395 [2024-12-10T02:54:59.681Z] Total : 23563.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:00.395 00:08:01.329 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:01.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.329 Nvme0n1 : 2.00 23562.50 92.04 0.00 0.00 0.00 0.00 0.00 00:08:01.329 [2024-12-10T02:55:00.615Z] =================================================================================================================== 00:08:01.329 [2024-12-10T02:55:00.615Z] Total : 23562.50 92.04 0.00 0.00 0.00 0.00 0.00 00:08:01.329 00:08:01.329 true 00:08:01.588 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:01.588 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:01.588 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:01.588 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:01.588 03:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4109726 00:08:02.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.529 Nvme0n1 : 3.00 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:02.529 [2024-12-10T02:55:01.815Z] =================================================================================================================== 00:08:02.529 [2024-12-10T02:55:01.815Z] Total : 23587.00 92.14 0.00 0.00 0.00 0.00 0.00 00:08:02.529 00:08:03.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.463 Nvme0n1 : 4.00 23694.00 92.55 0.00 0.00 0.00 0.00 0.00 00:08:03.463 [2024-12-10T02:55:02.749Z] 
=================================================================================================================== 00:08:03.463 [2024-12-10T02:55:02.749Z] Total : 23694.00 92.55 0.00 0.00 0.00 0.00 0.00 00:08:03.463 00:08:04.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.396 Nvme0n1 : 5.00 23757.20 92.80 0.00 0.00 0.00 0.00 0.00 00:08:04.396 [2024-12-10T02:55:03.682Z] =================================================================================================================== 00:08:04.396 [2024-12-10T02:55:03.682Z] Total : 23757.20 92.80 0.00 0.00 0.00 0.00 0.00 00:08:04.396 00:08:05.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.330 Nvme0n1 : 6.00 23810.00 93.01 0.00 0.00 0.00 0.00 0.00 00:08:05.330 [2024-12-10T02:55:04.616Z] =================================================================================================================== 00:08:05.330 [2024-12-10T02:55:04.616Z] Total : 23810.00 93.01 0.00 0.00 0.00 0.00 0.00 00:08:05.330 00:08:06.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.264 Nvme0n1 : 7.00 23850.86 93.17 0.00 0.00 0.00 0.00 0.00 00:08:06.264 [2024-12-10T02:55:05.550Z] =================================================================================================================== 00:08:06.264 [2024-12-10T02:55:05.550Z] Total : 23850.86 93.17 0.00 0.00 0.00 0.00 0.00 00:08:06.264 00:08:07.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.283 Nvme0n1 : 8.00 23870.25 93.24 0.00 0.00 0.00 0.00 0.00 00:08:07.283 [2024-12-10T02:55:06.569Z] =================================================================================================================== 00:08:07.283 [2024-12-10T02:55:06.569Z] Total : 23870.25 93.24 0.00 0.00 0.00 0.00 0.00 00:08:07.283 00:08:08.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.662 Nvme0n1 : 9.00 23885.33 93.30 0.00 0.00 0.00 0.00 0.00 00:08:08.662 [2024-12-10T02:55:07.948Z] =================================================================================================================== 00:08:08.662 [2024-12-10T02:55:07.948Z] Total : 23885.33 93.30 0.00 0.00 0.00 0.00 0.00 00:08:08.662 00:08:09.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.596 Nvme0n1 : 10.00 23910.10 93.40 0.00 0.00 0.00 0.00 0.00 00:08:09.596 [2024-12-10T02:55:08.882Z] =================================================================================================================== 00:08:09.596 [2024-12-10T02:55:08.882Z] Total : 23910.10 93.40 0.00 0.00 0.00 0.00 0.00 00:08:09.596 00:08:09.596 00:08:09.596 Latency(us) 00:08:09.596 [2024-12-10T02:55:08.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.596 Nvme0n1 : 10.01 23910.46 93.40 0.00 0.00 5350.44 3120.76 10735.42 00:08:09.596 [2024-12-10T02:55:08.882Z] =================================================================================================================== 00:08:09.596 [2024-12-10T02:55:08.882Z] Total : 23910.46 93.40 0.00 0.00 5350.44 3120.76 10735.42 00:08:09.596 { 00:08:09.596 "results": [ 00:08:09.596 { 00:08:09.596 "job": "Nvme0n1", 00:08:09.596 "core_mask": "0x2", 00:08:09.596 "workload": "randwrite", 00:08:09.596 "status": "finished", 00:08:09.596 "queue_depth": 128, 00:08:09.596 "io_size": 4096, 00:08:09.596 
"runtime": 10.005203, 00:08:09.596 "iops": 23910.459387980434, 00:08:09.596 "mibps": 93.40023198429857, 00:08:09.596 "io_failed": 0, 00:08:09.596 "io_timeout": 0, 00:08:09.596 "avg_latency_us": 5350.4404509645965, 00:08:09.596 "min_latency_us": 3120.7619047619046, 00:08:09.596 "max_latency_us": 10735.420952380953 00:08:09.596 } 00:08:09.596 ], 00:08:09.596 "core_count": 1 00:08:09.596 } 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4109710 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4109710 ']' 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4109710 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4109710 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4109710' 00:08:09.596 killing process with pid 4109710 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4109710 00:08:09.596 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.596 00:08:09.596 Latency(us) 00:08:09.596 [2024-12-10T02:55:08.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.596 [2024-12-10T02:55:08.882Z] =================================================================================================================== 00:08:09.596 [2024-12-10T02:55:08.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4109710 00:08:09.596 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.855 03:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.113 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:10.113 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.113 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.113 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:10.113 03:55:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4106550 00:08:10.113 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4106550 00:08:10.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4106550 Killed "${NVMF_APP[@]}" "$@" 00:08:10.371 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4111536 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4111536 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4111536 ']' 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.372 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.372 [2024-12-10 03:55:09.450401] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:08:10.372 [2024-12-10 03:55:09.450443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.372 [2024-12-10 03:55:09.529184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.372 [2024-12-10 03:55:09.568303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.372 [2024-12-10 03:55:09.568339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.372 [2024-12-10 03:55:09.568346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.372 [2024-12-10 03:55:09.568352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
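This second half of lvs_grow_dirty exercises crash recovery: the first nvmf_tgt (pid 4106550) is killed with SIGKILL while the grown lvstore is still open, so its metadata is never cleanly flushed, and the freshly started target must recover it from the same backing file. The blobstore notices just below ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1") are that replay; afterwards the lvol reappears with its 38 allocated clusters and the test asserts free_clusters == 99 - 38 = 61. The bdevperf summary above is likewise internally consistent: 23910 IOPS x 4 KiB ≈ 93.4 MiB/s, and with a queue depth of 128, Little's law gives 128 / 23910 ≈ 5.35 ms, matching the reported 5350 us average latency. A minimal sketch of the recovery-side checks, assuming the same rpc.py interface (UUIDs are placeholders):

  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096      # re-expose the dirty backing file
  rpc.py bdev_wait_for_examine                            # let vbdev_lvol claim it and run recovery
  rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000            # lvol is back after blobstore recovery
  rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> \
      | jq -r '.[0].free_clusters'                        # expect 99 - 38 = 61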
00:08:10.372 [2024-12-10 03:55:09.568357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.372 [2024-12-10 03:55:09.568820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.629 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.629 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.629 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.629 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.629 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.630 [2024-12-10 03:55:09.873665] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:10.630 [2024-12-10 03:55:09.873745] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:10.630 [2024-12-10 03:55:09.873768] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.630 03:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.887 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 -t 2000 00:08:11.145 [ 00:08:11.145 { 00:08:11.145 "name": "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2", 00:08:11.145 "aliases": [ 00:08:11.145 "lvs/lvol" 00:08:11.145 ], 00:08:11.145 "product_name": "Logical Volume", 00:08:11.145 "block_size": 4096, 00:08:11.145 "num_blocks": 38912, 00:08:11.145 "uuid": "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2", 00:08:11.145 "assigned_rate_limits": { 00:08:11.145 "rw_ios_per_sec": 0, 00:08:11.145 "rw_mbytes_per_sec": 0, 
00:08:11.145 "r_mbytes_per_sec": 0, 00:08:11.145 "w_mbytes_per_sec": 0 00:08:11.145 }, 00:08:11.145 "claimed": false, 00:08:11.145 "zoned": false, 00:08:11.145 "supported_io_types": { 00:08:11.145 "read": true, 00:08:11.145 "write": true, 00:08:11.145 "unmap": true, 00:08:11.145 "flush": false, 00:08:11.145 "reset": true, 00:08:11.145 "nvme_admin": false, 00:08:11.145 "nvme_io": false, 00:08:11.145 "nvme_io_md": false, 00:08:11.145 "write_zeroes": true, 00:08:11.145 "zcopy": false, 00:08:11.145 "get_zone_info": false, 00:08:11.145 "zone_management": false, 00:08:11.145 "zone_append": false, 00:08:11.145 "compare": false, 00:08:11.145 "compare_and_write": false, 00:08:11.145 "abort": false, 00:08:11.145 "seek_hole": true, 00:08:11.145 "seek_data": true, 00:08:11.145 "copy": false, 00:08:11.145 "nvme_iov_md": false 00:08:11.145 }, 00:08:11.145 "driver_specific": { 00:08:11.145 "lvol": { 00:08:11.145 "lvol_store_uuid": "d6711d2a-8c81-45b9-8f45-fae0b2a43d3c", 00:08:11.145 "base_bdev": "aio_bdev", 00:08:11.145 "thin_provision": false, 00:08:11.145 "num_allocated_clusters": 38, 00:08:11.145 "snapshot": false, 00:08:11.145 "clone": false, 00:08:11.145 "esnap_clone": false 00:08:11.145 } 00:08:11.145 } 00:08:11.145 } 00:08:11.145 ] 00:08:11.145 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:11.146 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:11.146 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:11.404 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:11.404 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:11.404 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:11.404 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:11.404 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.661 [2024-12-10 03:55:10.834814] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.661 03:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:11.918 request: 00:08:11.918 { 00:08:11.918 "uuid": "d6711d2a-8c81-45b9-8f45-fae0b2a43d3c", 00:08:11.918 "method": "bdev_lvol_get_lvstores", 00:08:11.918 "req_id": 1 00:08:11.918 } 00:08:11.918 Got JSON-RPC error response 00:08:11.918 response: 00:08:11.918 { 00:08:11.918 "code": -19, 00:08:11.918 "message": "No such device" 00:08:11.918 } 00:08:11.918 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:11.918 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.918 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.918 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.918 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.176 aio_bdev 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.176 03:55:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.176 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 -t 2000 00:08:12.434 [ 00:08:12.434 { 00:08:12.434 "name": "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2", 00:08:12.434 "aliases": [ 00:08:12.434 "lvs/lvol" 00:08:12.434 ], 00:08:12.434 "product_name": "Logical Volume", 00:08:12.434 "block_size": 4096, 00:08:12.434 "num_blocks": 38912, 00:08:12.434 "uuid": "7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2", 00:08:12.434 "assigned_rate_limits": { 00:08:12.434 "rw_ios_per_sec": 0, 00:08:12.434 "rw_mbytes_per_sec": 0, 00:08:12.434 "r_mbytes_per_sec": 0, 00:08:12.434 "w_mbytes_per_sec": 0 00:08:12.434 }, 00:08:12.434 "claimed": false, 00:08:12.434 "zoned": false, 00:08:12.434 "supported_io_types": { 00:08:12.434 "read": true, 00:08:12.434 "write": true, 00:08:12.434 "unmap": true, 00:08:12.434 "flush": false, 00:08:12.434 "reset": true, 00:08:12.434 "nvme_admin": false, 00:08:12.434 "nvme_io": false, 00:08:12.434 "nvme_io_md": false, 00:08:12.434 "write_zeroes": true, 00:08:12.434 "zcopy": false, 00:08:12.434 "get_zone_info": false, 00:08:12.434 "zone_management": false, 00:08:12.434 "zone_append": false, 00:08:12.434 "compare": false, 00:08:12.434 "compare_and_write": false, 00:08:12.434 "abort": false, 00:08:12.434 "seek_hole": true, 00:08:12.434 "seek_data": true, 00:08:12.434 "copy": false, 00:08:12.434 "nvme_iov_md": false 00:08:12.434 }, 00:08:12.434 "driver_specific": { 00:08:12.434 "lvol": { 00:08:12.434 "lvol_store_uuid": "d6711d2a-8c81-45b9-8f45-fae0b2a43d3c", 00:08:12.434 "base_bdev": "aio_bdev", 00:08:12.434 "thin_provision": false, 00:08:12.434 "num_allocated_clusters": 38, 00:08:12.434 "snapshot": false, 00:08:12.434 "clone": false, 00:08:12.434 "esnap_clone": false 00:08:12.434 } 00:08:12.434 } 00:08:12.434 } 00:08:12.434 ] 00:08:12.434 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:12.434 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:12.434 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.692 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.692 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:12.692 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.950 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.950 03:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e7eafe6-424b-4b50-8fe0-b1c344e4c0e2 00:08:12.950 03:55:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6711d2a-8c81-45b9-8f45-fae0b2a43d3c 00:08:13.209 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.466 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.466 00:08:13.466 real 0m16.932s 00:08:13.466 user 0m43.602s 00:08:13.466 sys 0m3.760s 00:08:13.466 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:13.467 ************************************ 00:08:13.467 END TEST lvs_grow_dirty 00:08:13.467 ************************************ 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:13.467 nvmf_trace.0 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.467 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.467 rmmod nvme_tcp 00:08:13.467 rmmod nvme_fabrics 00:08:13.467 rmmod nvme_keyring 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:13.726 
03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4111536 ']' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4111536 ']' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4111536' 00:08:13.726 killing process with pid 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4111536 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.726 03:55:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.260 00:08:16.260 real 0m41.650s 00:08:16.260 user 1m4.273s 00:08:16.260 sys 0m10.123s 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.260 ************************************ 00:08:16.260 END TEST nvmf_lvs_grow 00:08:16.260 ************************************ 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.260 ************************************ 00:08:16.260 START TEST nvmf_bdev_io_wait 00:08:16.260 ************************************ 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.260 * Looking for test storage... 00:08:16.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.260 --rc genhtml_branch_coverage=1 00:08:16.260 --rc genhtml_function_coverage=1 00:08:16.260 --rc genhtml_legend=1 00:08:16.260 --rc geninfo_all_blocks=1 00:08:16.260 --rc geninfo_unexecuted_blocks=1 00:08:16.260 00:08:16.260 ' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.260 --rc genhtml_branch_coverage=1 00:08:16.260 --rc genhtml_function_coverage=1 00:08:16.260 --rc genhtml_legend=1 00:08:16.260 --rc geninfo_all_blocks=1 00:08:16.260 --rc geninfo_unexecuted_blocks=1 00:08:16.260 00:08:16.260 ' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.260 --rc genhtml_branch_coverage=1 00:08:16.260 --rc genhtml_function_coverage=1 00:08:16.260 --rc genhtml_legend=1 00:08:16.260 --rc geninfo_all_blocks=1 00:08:16.260 --rc geninfo_unexecuted_blocks=1 00:08:16.260 00:08:16.260 ' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.260 --rc genhtml_branch_coverage=1 00:08:16.260 --rc genhtml_function_coverage=1 00:08:16.260 --rc genhtml_legend=1 00:08:16.260 --rc geninfo_all_blocks=1 00:08:16.260 --rc geninfo_unexecuted_blocks=1 00:08:16.260 00:08:16.260 ' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.260 03:55:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.260 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.261 03:55:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.828 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.828 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.828 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.828 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:22.829 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:22.829 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.829 03:55:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:22.829 Found net devices under 0000:af:00.0: cvl_0_0 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:22.829 Found net devices under 0000:af:00.1: cvl_0_1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:08:22.829 00:08:22.829 --- 10.0.0.2 ping statistics --- 00:08:22.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.829 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:08:22.829 00:08:22.829 --- 10.0.0.1 ping statistics --- 00:08:22.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.829 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.829 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4115740 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4115740 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4115740 ']' 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 [2024-12-10 03:55:21.372970] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
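Editor's sketch: the entries above show nvmfappstart launching nvmf_tgt inside the target namespace with --wait-for-rpc, then waitforlisten blocking until the RPC socket answers. A minimal sketch of that start-and-wait pattern, run from the SPDK tree (this is not the harness's exact waitforlisten; the polling loop and retry count here are illustrative, while the nvmf_tgt invocation and socket path are taken from the trace):

# Start the target in its namespace, then poll the RPC socket until it responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
    sleep 0.1
done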
00:08:22.830 [2024-12-10 03:55:21.373014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.830 [2024-12-10 03:55:21.449545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.830 [2024-12-10 03:55:21.491046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.830 [2024-12-10 03:55:21.491082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.830 [2024-12-10 03:55:21.491089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.830 [2024-12-10 03:55:21.491096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.830 [2024-12-10 03:55:21.491101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.830 [2024-12-10 03:55:21.492419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.830 [2024-12-10 03:55:21.492526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.830 [2024-12-10 03:55:21.492561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.830 [2024-12-10 03:55:21.492561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:22.830 [2024-12-10 03:55:21.628416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 Malloc0 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 [2024-12-10 03:55:21.671573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4115765 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4115767 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.830 { 00:08:22.830 "params": { 
00:08:22.830 "name": "Nvme$subsystem", 00:08:22.830 "trtype": "$TEST_TRANSPORT", 00:08:22.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.830 "adrfam": "ipv4", 00:08:22.830 "trsvcid": "$NVMF_PORT", 00:08:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.830 "hdgst": ${hdgst:-false}, 00:08:22.830 "ddgst": ${ddgst:-false} 00:08:22.830 }, 00:08:22.830 "method": "bdev_nvme_attach_controller" 00:08:22.830 } 00:08:22.830 EOF 00:08:22.830 )") 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4115769 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.830 { 00:08:22.830 "params": { 00:08:22.830 "name": "Nvme$subsystem", 00:08:22.830 "trtype": "$TEST_TRANSPORT", 00:08:22.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.830 "adrfam": "ipv4", 00:08:22.830 "trsvcid": "$NVMF_PORT", 00:08:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.830 "hdgst": ${hdgst:-false}, 00:08:22.830 "ddgst": ${ddgst:-false} 00:08:22.830 }, 00:08:22.830 "method": "bdev_nvme_attach_controller" 00:08:22.830 } 00:08:22.830 EOF 00:08:22.830 )") 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4115772 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:22.830 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.830 { 00:08:22.830 "params": { 
00:08:22.830 "name": "Nvme$subsystem", 00:08:22.830 "trtype": "$TEST_TRANSPORT", 00:08:22.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "$NVMF_PORT", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.831 "hdgst": ${hdgst:-false}, 00:08:22.831 "ddgst": ${ddgst:-false} 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 } 00:08:22.831 EOF 00:08:22.831 )") 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.831 { 00:08:22.831 "params": { 00:08:22.831 "name": "Nvme$subsystem", 00:08:22.831 "trtype": "$TEST_TRANSPORT", 00:08:22.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "$NVMF_PORT", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.831 "hdgst": ${hdgst:-false}, 00:08:22.831 "ddgst": ${ddgst:-false} 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 } 00:08:22.831 EOF 00:08:22.831 )") 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4115765 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.831 "params": { 00:08:22.831 "name": "Nvme1", 00:08:22.831 "trtype": "tcp", 00:08:22.831 "traddr": "10.0.0.2", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "4420", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.831 "hdgst": false, 00:08:22.831 "ddgst": false 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 }' 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
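Editor's sketch: with the target listening, the script provisions it entirely over RPC, as traced above: a deliberately tiny bdev_io pool (bdev_set_options -p 5 -c 1) so that I/O exhausts it and exercises the io_wait path, then framework init, the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 listening on 10.0.0.2:4420. A hedged sketch of the same sequence as direct rpc.py calls (rpc_cmd in the harness wraps the equivalent; the rpc shorthand below is hypothetical):

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc bdev_set_options -p 5 -c 1      # tiny bdev_io pool: forces the io_wait path under load
rpc framework_start_init            # leave --wait-for-rpc mode
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420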
00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.831 "params": { 00:08:22.831 "name": "Nvme1", 00:08:22.831 "trtype": "tcp", 00:08:22.831 "traddr": "10.0.0.2", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "4420", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.831 "hdgst": false, 00:08:22.831 "ddgst": false 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 }' 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.831 "params": { 00:08:22.831 "name": "Nvme1", 00:08:22.831 "trtype": "tcp", 00:08:22.831 "traddr": "10.0.0.2", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "4420", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.831 "hdgst": false, 00:08:22.831 "ddgst": false 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 }' 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:22.831 03:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.831 "params": { 00:08:22.831 "name": "Nvme1", 00:08:22.831 "trtype": "tcp", 00:08:22.831 "traddr": "10.0.0.2", 00:08:22.831 "adrfam": "ipv4", 00:08:22.831 "trsvcid": "4420", 00:08:22.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.831 "hdgst": false, 00:08:22.831 "ddgst": false 00:08:22.831 }, 00:08:22.831 "method": "bdev_nvme_attach_controller" 00:08:22.831 }' 00:08:22.831 [2024-12-10 03:55:21.722780] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:08:22.831 [2024-12-10 03:55:21.722780] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:08:22.831 [2024-12-10 03:55:21.722831] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:22.831 [2024-12-10 03:55:21.722831] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:22.831 [2024-12-10 03:55:21.722853] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:08:22.831 [2024-12-10 03:55:21.722888] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:22.831 [2024-12-10 03:55:21.728087] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
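Editor's note: the two interleaved EAL parameter lines above (the 0x10 and 0x20 instances logged at the same microsecond) have been untangled from the raw capture; no content was changed. The four bdevperf initiators each receive a one-controller config over process substitution, which is what appears as --json /dev/fd/63 in their command lines. A sketch of the pattern, assuming gen_nvmf_target_json wraps the traced attach-controller stanza in a bdev-subsystem config (gen_cfg is a hypothetical stand-in for it):

gen_cfg() {  # render the bdev_nvme_attach_controller config for subsystem $1
  local subsystem=$1
  cat <<EOF | jq .
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF
}
# One of the four concurrent instances; distinct core masks (-m), shm ids (-i)
# and the resulting DPDK file prefixes (spdk1..spdk4) keep them from colliding.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_cfg 1) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!

The read (-m 0x20 -i 2), flush (-m 0x40 -i 3) and unmap (-m 0x80 -i 4) instances are launched the same way, after which the script waits on each PID in turn, as the trace below shows.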
00:08:22.831 [2024-12-10 03:55:21.728135] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:22.831 [2024-12-10 03:55:21.911726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.831 [2024-12-10 03:55:21.944447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.831 [2024-12-10 03:55:21.958699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.831 [2024-12-10 03:55:21.981377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:22.831 [2024-12-10 03:55:22.056634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.831 [2024-12-10 03:55:22.101221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:23.089 [2024-12-10 03:55:22.157849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.089 [2024-12-10 03:55:22.209548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:23.089 Running I/O for 1 seconds... 00:08:23.346 Running I/O for 1 seconds... 00:08:23.346 Running I/O for 1 seconds... 00:08:23.346 Running I/O for 1 seconds... 00:08:24.280 243656.00 IOPS, 951.78 MiB/s 00:08:24.280 Latency(us) 00:08:24.280 [2024-12-10T02:55:23.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.280 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:24.280 Nvme1n1 : 1.00 243288.85 950.35 0.00 0.00 523.23 219.43 1482.36 00:08:24.280 [2024-12-10T02:55:23.566Z] =================================================================================================================== 00:08:24.280 [2024-12-10T02:55:23.566Z] Total : 243288.85 950.35 0.00 0.00 523.23 219.43 1482.36 00:08:24.280 13254.00 IOPS, 51.77 MiB/s 00:08:24.280 Latency(us) 00:08:24.280 [2024-12-10T02:55:23.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.280 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:24.280 Nvme1n1 : 1.01 13315.69 52.01 0.00 0.00 9584.28 4431.48 16227.96 00:08:24.280 [2024-12-10T02:55:23.566Z] =================================================================================================================== 00:08:24.280 [2024-12-10T02:55:23.566Z] Total : 13315.69 52.01 0.00 0.00 9584.28 4431.48 16227.96 00:08:24.280 9564.00 IOPS, 37.36 MiB/s 00:08:24.280 Latency(us) 00:08:24.280 [2024-12-10T02:55:23.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.280 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:24.280 Nvme1n1 : 1.01 9619.95 37.58 0.00 0.00 13251.46 6584.81 22219.82 00:08:24.280 [2024-12-10T02:55:23.566Z] =================================================================================================================== 00:08:24.280 [2024-12-10T02:55:23.566Z] Total : 9619.95 37.58 0.00 0.00 13251.46 6584.81 22219.82 00:08:24.280 9501.00 IOPS, 37.11 MiB/s 00:08:24.280 Latency(us) 00:08:24.280 [2024-12-10T02:55:23.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.280 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:24.280 Nvme1n1 : 1.01 9583.24 37.43 0.00 0.00 13317.51 3994.58 25964.74 00:08:24.280 [2024-12-10T02:55:23.566Z] 
=================================================================================================================== 00:08:24.280 [2024-12-10T02:55:23.566Z] Total : 9583.24 37.43 0.00 0.00 13317.51 3994.58 25964.74 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4115767 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4115769 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4115772 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.538 rmmod nvme_tcp 00:08:24.538 rmmod nvme_fabrics 00:08:24.538 rmmod nvme_keyring 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4115740 ']' 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4115740 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4115740 ']' 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4115740 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4115740 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 4115740' 00:08:24.538 killing process with pid 4115740 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4115740 00:08:24.538 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4115740 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.797 03:55:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.701 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.701 00:08:26.701 real 0m10.843s 00:08:26.701 user 0m16.536s 00:08:26.701 sys 0m6.284s 00:08:26.701 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.701 03:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.701 ************************************ 00:08:26.701 END TEST nvmf_bdev_io_wait 00:08:26.702 ************************************ 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.960 ************************************ 00:08:26.960 START TEST nvmf_queue_depth 00:08:26.960 ************************************ 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:26.960 * Looking for test storage... 
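Editor's note on the nvmftestfini teardown above, before the queue_depth preamble continues: because ipts tagged every firewall rule it added with an 'SPDK_NVMF:' comment, the iptr cleanup can remove exactly those rules by round-tripping the ruleset, using only the commands seen in the trace:

iptables-save | grep -v SPDK_NVMF | iptables-restore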
00:08:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.960 --rc genhtml_branch_coverage=1 00:08:26.960 --rc genhtml_function_coverage=1 00:08:26.960 --rc genhtml_legend=1 00:08:26.960 --rc geninfo_all_blocks=1 00:08:26.960 --rc geninfo_unexecuted_blocks=1 00:08:26.960 00:08:26.960 ' 00:08:26.960 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.960 --rc genhtml_branch_coverage=1 00:08:26.960 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.961 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:27.220 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.221 03:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:33.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:33.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:33.788 Found net devices under 0000:af:00.0: cvl_0_0 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:33.788 Found net devices under 0000:af:00.1: cvl_0_1 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.788 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.789 03:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:08:33.789 00:08:33.789 --- 10.0.0.2 ping statistics --- 00:08:33.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.789 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:33.789 00:08:33.789 --- 10.0.0.1 ping statistics --- 00:08:33.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.789 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4119704 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4119704 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4119704 ']' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 [2024-12-10 03:55:32.288151] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
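Note: the nvmf_tcp_init trace above builds the physical-NIC test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and both get addresses on 10.0.0.0/24 before connectivity is verified with the two pings. A minimal hand-run sketch of the same setup, with the commands taken from the trace (the iptables comment text is abbreviated here), and the target then launched inside the namespace exactly as nvmfappstart does:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                                   # open the NVMe/TCP port, tagged for later cleanup
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # core mask 0x2, all tracepoint groups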
00:08:33.789 [2024-12-10 03:55:32.288210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.789 [2024-12-10 03:55:32.369412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.789 [2024-12-10 03:55:32.406945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.789 [2024-12-10 03:55:32.406980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.789 [2024-12-10 03:55:32.406987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.789 [2024-12-10 03:55:32.406993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.789 [2024-12-10 03:55:32.406999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.789 [2024-12-10 03:55:32.407506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 [2024-12-10 03:55:32.550498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 Malloc0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.789 03:55:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 [2024-12-10 03:55:32.600661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4119729 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4119729 /var/tmp/bdevperf.sock 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4119729 ']' 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.789 [2024-12-10 03:55:32.649972] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
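Note: before starting the load generator, queue_depth.sh provisions the target over JSON-RPC; the rpc_cmd calls traced above are thin wrappers around scripts/rpc.py. A hand-run sketch of the same sequence (flags and socket paths copied from the trace; the default /var/tmp/spdk.sock is assumed for the target side):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte IO unit
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM disk, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with -z (wait for RPC), attached to that subsystem, and driven at queue depth 1024, matching the trace:

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests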
00:08:33.789 [2024-12-10 03:55:32.650012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119729 ] 00:08:33.789 [2024-12-10 03:55:32.723237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.789 [2024-12-10 03:55:32.762361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.789 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.790 03:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.790 NVMe0n1 00:08:33.790 03:55:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.790 03:55:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.047 Running I/O for 10 seconds... 00:08:35.916 12288.00 IOPS, 48.00 MiB/s [2024-12-10T02:55:36.576Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-10T02:55:37.510Z] 12480.00 IOPS, 48.75 MiB/s [2024-12-10T02:55:38.443Z] 12528.50 IOPS, 48.94 MiB/s [2024-12-10T02:55:39.374Z] 12538.20 IOPS, 48.98 MiB/s [2024-12-10T02:55:40.307Z] 12601.50 IOPS, 49.22 MiB/s [2024-12-10T02:55:41.244Z] 12563.14 IOPS, 49.07 MiB/s [2024-12-10T02:55:42.618Z] 12593.62 IOPS, 49.19 MiB/s [2024-12-10T02:55:43.552Z] 12600.22 IOPS, 49.22 MiB/s [2024-12-10T02:55:43.552Z] 12607.50 IOPS, 49.25 MiB/s 00:08:44.266 Latency(us) 00:08:44.266 [2024-12-10T02:55:43.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.266 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:44.266 Verification LBA range: start 0x0 length 0x4000 00:08:44.266 NVMe0n1 : 10.05 12638.84 49.37 0.00 0.00 80741.63 8613.30 52678.46 00:08:44.266 [2024-12-10T02:55:43.552Z] =================================================================================================================== 00:08:44.266 [2024-12-10T02:55:43.552Z] Total : 12638.84 49.37 0.00 0.00 80741.63 8613.30 52678.46 00:08:44.266 { 00:08:44.266 "results": [ 00:08:44.266 { 00:08:44.266 "job": "NVMe0n1", 00:08:44.266 "core_mask": "0x1", 00:08:44.266 "workload": "verify", 00:08:44.266 "status": "finished", 00:08:44.266 "verify_range": { 00:08:44.266 "start": 0, 00:08:44.266 "length": 16384 00:08:44.266 }, 00:08:44.266 "queue_depth": 1024, 00:08:44.266 "io_size": 4096, 00:08:44.266 "runtime": 10.04689, 00:08:44.266 "iops": 12638.836495671794, 00:08:44.266 "mibps": 49.370455061217946, 00:08:44.266 "io_failed": 0, 00:08:44.266 "io_timeout": 0, 00:08:44.266 "avg_latency_us": 80741.62895125293, 00:08:44.266 "min_latency_us": 8613.302857142857, 00:08:44.266 "max_latency_us": 52678.460952380956 00:08:44.266 } 00:08:44.266 ], 00:08:44.266 "core_count": 1 00:08:44.266 } 00:08:44.266 03:55:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4119729 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4119729 ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4119729 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119729 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119729' 00:08:44.266 killing process with pid 4119729 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4119729 00:08:44.266 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.266 00:08:44.266 Latency(us) 00:08:44.266 [2024-12-10T02:55:43.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.266 [2024-12-10T02:55:43.552Z] =================================================================================================================== 00:08:44.266 [2024-12-10T02:55:43.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4119729 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.266 rmmod nvme_tcp 00:08:44.266 rmmod nvme_fabrics 00:08:44.266 rmmod nvme_keyring 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4119704 ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4119704 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4119704 ']' 00:08:44.266 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 4119704 00:08:44.267 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:44.267 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.267 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119704 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119704' 00:08:44.525 killing process with pid 4119704 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4119704 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4119704 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.525 03:55:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.061 00:08:47.061 real 0m19.776s 00:08:47.061 user 0m23.212s 00:08:47.061 sys 0m6.035s 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.061 ************************************ 00:08:47.061 END TEST nvmf_queue_depth 00:08:47.061 ************************************ 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.061 ************************************ 00:08:47.061 START TEST nvmf_target_multipath 00:08:47.061 ************************************ 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.061 * Looking for test storage... 00:08:47.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.061 03:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.061 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.061 --rc genhtml_branch_coverage=1 00:08:47.061 --rc genhtml_function_coverage=1 00:08:47.061 --rc genhtml_legend=1 00:08:47.061 --rc geninfo_all_blocks=1 00:08:47.061 --rc geninfo_unexecuted_blocks=1 00:08:47.062 00:08:47.062 ' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.062 --rc genhtml_branch_coverage=1 00:08:47.062 --rc genhtml_function_coverage=1 00:08:47.062 --rc genhtml_legend=1 00:08:47.062 --rc geninfo_all_blocks=1 00:08:47.062 --rc geninfo_unexecuted_blocks=1 00:08:47.062 00:08:47.062 ' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.062 --rc genhtml_branch_coverage=1 00:08:47.062 --rc genhtml_function_coverage=1 00:08:47.062 --rc genhtml_legend=1 00:08:47.062 --rc geninfo_all_blocks=1 00:08:47.062 --rc geninfo_unexecuted_blocks=1 00:08:47.062 00:08:47.062 ' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.062 --rc genhtml_branch_coverage=1 00:08:47.062 --rc genhtml_function_coverage=1 00:08:47.062 --rc genhtml_legend=1 00:08:47.062 --rc geninfo_all_blocks=1 00:08:47.062 --rc geninfo_unexecuted_blocks=1 00:08:47.062 00:08:47.062 ' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.062 03:55:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.631 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:53.632 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:53.632 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:53.632 Found net devices under 0000:af:00.0: cvl_0_0 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.632 03:55:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:53.632 Found net devices under 0000:af:00.1: cvl_0_1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.632 03:55:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:08:53.632 00:08:53.632 --- 10.0.0.2 ping statistics --- 00:08:53.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.632 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:08:53.632 00:08:53.632 --- 10.0.0.1 ping statistics --- 00:08:53.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.632 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:53.632 only one NIC for nvmf test 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
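Note: the nvmftestfini path that begins here tears the fixture back down. A sketch of what the trace performs; the body of _remove_spdk_ns is not shown in this log, so the namespace-delete line is an assumption based on its name, and the break condition inside the retry loop is a simplification:

set +e                                                  # module unloads may legitimately fail
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break                    # retried: the module can still be busy right after the test
done
modprobe -v -r nvme-fabrics
set -e
iptables-save | grep -v SPDK_NVMF | iptables-restore    # the iptr helper: drop only the SPDK_NVMF-tagged rule
ip netns delete cvl_0_0_ns_spdk                         # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1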
00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.632 rmmod nvme_tcp 00:08:53.632 rmmod nvme_fabrics 00:08:53.632 rmmod nvme_keyring 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:53.632 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.633 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.010 00:08:55.010 real 0m8.333s 00:08:55.010 user 0m1.845s 00:08:55.010 sys 0m4.513s 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:55.010 ************************************ 00:08:55.010 END TEST nvmf_target_multipath 00:08:55.010 ************************************ 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.010 03:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.270 ************************************ 00:08:55.270 START TEST nvmf_zcopy 00:08:55.270 ************************************ 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:55.270 * Looking for test storage... 
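
The nvmftestfini teardown that closed out the multipath run above reverses that setup: it unloads the NVMe/TCP module stack under a bounded retry loop (modules can be briefly busy), strips every iptables rule tagged SPDK_NVMF by replaying a filtered dump, removes the target namespace, and flushes the initiator address. A condensed sketch, assuming _remove_spdk_ns (whose body runs with tracing disabled above) amounts to deleting the namespace:

  set +e
  for i in {1..20}; do
      # The trace shows this also pulling out nvme_fabrics and nvme_keyring.
      modprobe -v -r nvme-tcp
      modprobe -v -r nvme-fabrics && break
  done
  set -e

  # Replay the ruleset minus everything tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does here
  ip -4 addr flush cvl_0_1
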
00:08:55.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.270 --rc genhtml_branch_coverage=1 00:08:55.270 --rc genhtml_function_coverage=1 00:08:55.270 --rc genhtml_legend=1 00:08:55.270 --rc geninfo_all_blocks=1 00:08:55.270 --rc geninfo_unexecuted_blocks=1 00:08:55.270 00:08:55.270 ' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.270 --rc genhtml_branch_coverage=1 00:08:55.270 --rc genhtml_function_coverage=1 00:08:55.270 --rc genhtml_legend=1 00:08:55.270 --rc geninfo_all_blocks=1 00:08:55.270 --rc geninfo_unexecuted_blocks=1 00:08:55.270 00:08:55.270 ' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.270 --rc genhtml_branch_coverage=1 00:08:55.270 --rc genhtml_function_coverage=1 00:08:55.270 --rc genhtml_legend=1 00:08:55.270 --rc geninfo_all_blocks=1 00:08:55.270 --rc geninfo_unexecuted_blocks=1 00:08:55.270 00:08:55.270 ' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.270 --rc genhtml_branch_coverage=1 00:08:55.270 --rc genhtml_function_coverage=1 00:08:55.270 --rc genhtml_legend=1 00:08:55.270 --rc geninfo_all_blocks=1 00:08:55.270 --rc geninfo_unexecuted_blocks=1 00:08:55.270 00:08:55.270 ' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.270 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.271 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:01.837 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:01.837 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:01.837 Found net devices under 0000:af:00.0: cvl_0_0 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.837 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:01.837 Found net devices under 0000:af:00.1: cvl_0_1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:09:01.838 00:09:01.838 --- 10.0.0.2 ping statistics --- 00:09:01.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.838 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:09:01.838 00:09:01.838 --- 10.0.0.1 ping statistics --- 00:09:01.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.838 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4128459 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4128459 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4128459 ']' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 [2024-12-10 03:56:00.504763] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
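
nvmfappstart, traced above, launches the target application inside the namespace and blocks until its RPC socket answers; "waitforlisten 4128459" is doing exactly that against /var/tmp/spdk.sock. A sketch of the equivalent by hand, with the readiness probe hedged as an rpc_get_methods call (waitforlisten's actual probe may differ):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the target is ready.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.5
  done
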
00:09:01.838 [2024-12-10 03:56:00.504811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.838 [2024-12-10 03:56:00.580666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.838 [2024-12-10 03:56:00.618850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.838 [2024-12-10 03:56:00.618885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.838 [2024-12-10 03:56:00.618893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.838 [2024-12-10 03:56:00.618899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.838 [2024-12-10 03:56:00.618904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.838 [2024-12-10 03:56:00.619386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 [2024-12-10 03:56:00.763291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 [2024-12-10 03:56:00.783478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 malloc0 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.838 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.838 { 00:09:01.838 "params": { 00:09:01.838 "name": "Nvme$subsystem", 00:09:01.838 "trtype": "$TEST_TRANSPORT", 00:09:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.839 "adrfam": "ipv4", 00:09:01.839 "trsvcid": "$NVMF_PORT", 00:09:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.839 "hdgst": ${hdgst:-false}, 00:09:01.839 "ddgst": ${ddgst:-false} 00:09:01.839 }, 00:09:01.839 "method": "bdev_nvme_attach_controller" 00:09:01.839 } 00:09:01.839 EOF 00:09:01.839 )") 00:09:01.839 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:01.839 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
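
The rpc_cmd calls traced above provision the target end to end: a zero-copy TCP transport, a subsystem that accepts any host, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev exposed as namespace 1. Spelled out as plain rpc.py calls (rpc_cmd is assumed to be a thin wrapper over scripts/rpc.py):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10        # -a: allow any host; -m: max 10 namespaces
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_malloc_create 32 4096 -b malloc0  # 32 MiB RAM bdev, 4 KiB blocks
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
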
00:09:01.839 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:01.839 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.839 "params": { 00:09:01.839 "name": "Nvme1", 00:09:01.839 "trtype": "tcp", 00:09:01.839 "traddr": "10.0.0.2", 00:09:01.839 "adrfam": "ipv4", 00:09:01.839 "trsvcid": "4420", 00:09:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.839 "hdgst": false, 00:09:01.839 "ddgst": false 00:09:01.839 }, 00:09:01.839 "method": "bdev_nvme_attach_controller" 00:09:01.839 }' 00:09:01.839 [2024-12-10 03:56:00.868705] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:09:01.839 [2024-12-10 03:56:00.868748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128503 ] 00:09:01.839 [2024-12-10 03:56:00.942833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.839 [2024-12-10 03:56:00.982674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.096 Running I/O for 10 seconds... 00:09:04.006 8807.00 IOPS, 68.80 MiB/s [2024-12-10T02:56:04.227Z] 8870.50 IOPS, 69.30 MiB/s [2024-12-10T02:56:05.600Z] 8902.00 IOPS, 69.55 MiB/s [2024-12-10T02:56:06.533Z] 8901.75 IOPS, 69.54 MiB/s [2024-12-10T02:56:07.467Z] 8907.40 IOPS, 69.59 MiB/s [2024-12-10T02:56:08.401Z] 8916.50 IOPS, 69.66 MiB/s [2024-12-10T02:56:09.334Z] 8919.43 IOPS, 69.68 MiB/s [2024-12-10T02:56:10.268Z] 8924.25 IOPS, 69.72 MiB/s [2024-12-10T02:56:11.641Z] 8916.89 IOPS, 69.66 MiB/s [2024-12-10T02:56:11.641Z] 8910.70 IOPS, 69.61 MiB/s 00:09:12.355 Latency(us) 00:09:12.355 [2024-12-10T02:56:11.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.355 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:12.355 Verification LBA range: start 0x0 length 0x1000 00:09:12.355 Nvme1n1 : 10.01 8910.94 69.62 0.00 0.00 14323.06 247.71 23218.47 00:09:12.355 [2024-12-10T02:56:11.641Z] =================================================================================================================== 00:09:12.355 [2024-12-10T02:56:11.641Z] Total : 8910.94 69.62 0.00 0.00 14323.06 247.71 23218.47 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4130266 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.355 { 00:09:12.355 "params": { 00:09:12.355 "name": 
"Nvme$subsystem", 00:09:12.355 "trtype": "$TEST_TRANSPORT", 00:09:12.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.355 "adrfam": "ipv4", 00:09:12.355 "trsvcid": "$NVMF_PORT", 00:09:12.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.355 "hdgst": ${hdgst:-false}, 00:09:12.355 "ddgst": ${ddgst:-false} 00:09:12.355 }, 00:09:12.355 "method": "bdev_nvme_attach_controller" 00:09:12.355 } 00:09:12.355 EOF 00:09:12.355 )") 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:12.355 [2024-12-10 03:56:11.378654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.378692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:12.355 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.355 "params": { 00:09:12.355 "name": "Nvme1", 00:09:12.355 "trtype": "tcp", 00:09:12.355 "traddr": "10.0.0.2", 00:09:12.355 "adrfam": "ipv4", 00:09:12.355 "trsvcid": "4420", 00:09:12.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.355 "hdgst": false, 00:09:12.355 "ddgst": false 00:09:12.355 }, 00:09:12.355 "method": "bdev_nvme_attach_controller" 00:09:12.355 }' 00:09:12.355 [2024-12-10 03:56:11.390655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.390669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.402678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.402688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.414709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.414719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.419092] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:09:12.355 [2024-12-10 03:56:11.419134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130266 ] 00:09:12.355 [2024-12-10 03:56:11.426742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.426752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.438772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.438782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.450810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.450822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.462838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.462849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.474870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.474880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.486900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.486909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.492901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.355 [2024-12-10 03:56:11.498934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.498945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.510981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.510997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.522998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.523009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.532535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.355 [2024-12-10 03:56:11.535041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.535059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.547073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.547095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.559099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.559120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.571127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:12.355 [2024-12-10 03:56:11.571141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.583160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.583178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.595200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.595216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.607225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.607236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.619269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.619291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.355 [2024-12-10 03:56:11.631296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.355 [2024-12-10 03:56:11.631313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.613 [2024-12-10 03:56:11.643338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.613 [2024-12-10 03:56:11.643353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.613 [2024-12-10 03:56:11.655368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.613 [2024-12-10 03:56:11.655383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.613 [2024-12-10 03:56:11.667395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.614 [2024-12-10 03:56:11.667410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.614 [2024-12-10 03:56:11.720438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.614 [2024-12-10 03:56:11.720456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.614 [2024-12-10 03:56:11.731570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.614 [2024-12-10 03:56:11.731583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.614 Running I/O for 5 seconds... 
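
Every iteration above fails the same way by design: with namespace 1 already attached, re-issuing nvmf_subsystem_add_ns forces the subsystem through its pause/resume path (note the nvmf_rpc_ns_paused frames) while bdevperf's 5-second randrw job keeps zero-copy I/O in flight, which is the race this test exercises. The tight ~12 ms spacing of the errors suggests a loop of roughly this shape (a hedged reconstruction, not the literal zcopy.sh body):

  # Hammer the paused-subsystem path while I/O runs; each call is expected
  # to fail with "Requested NSID 1 already in use".
  while kill -0 "$perfpid" 2>/dev/null; do
      ./scripts/rpc.py -s /var/tmp/spdk.sock \
          nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
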
00:09:12.614 [2024-12-10 03:56:11.742435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:12.614 [2024-12-10 03:56:11.742455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:11.751 through 03:56:12.714 ...]
00:09:13.647 [2024-12-10 03:56:12.728304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.647 [2024-12-10 03:56:12.728322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:13.647 17122.00 IOPS, 133.77 MiB/s [2024-12-10T02:56:12.933Z]
00:09:13.647 [2024-12-10 03:56:12.737164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.647 [2024-12-10 03:56:12.737188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:12.751 through 03:56:12.823 ...]
00:09:13.648 [2024-12-10 03:56:12.833102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.648 [2024-12-10 03:56:12.833120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
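Interleaved with the RPC errors is bdevperf-style progress output: the "Running I/O for 5 seconds..." banner and a once-per-second throughput tick; the bracketed ISO stamps such as [2024-12-10T02:56:12.933Z] look like Jenkins timestamper residue attached to those lines rather than part of the tool's own output. The tick is internally consistent: 133.77 MiB/s divided by 17122.00 IOPS is almost exactly 8 KiB per operation, which suggests an 8 KiB I/O size (an inference from the numbers, not something the log states). A one-line check:

  # Bytes per second divided by operations per second gives the I/O size.
  awk 'BEGIN { printf "%.2f KiB per I/O\n", 133.77 * 1024 / 17122.00 }'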
00:09:13.648 [2024-12-10 03:56:12.847590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.648 [2024-12-10 03:56:12.847608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:12.861 through 03:56:13.735 ...]
00:09:14.681 17175.50 IOPS, 134.18 MiB/s [2024-12-10T02:56:13.967Z]
00:09:14.681 [2024-12-10 03:56:13.748188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:14.681 [2024-12-10 03:56:13.748206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
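Every attempt in this loop pins the namespace ID with an explicit NSID 1, so every attempt after the first collides. When the collision is not the point of the exercise, the usual options are to let the target choose (omitting the NSID asks for the lowest free one, at least in recent SPDK releases) or to check which NSIDs are taken first. A sketch of the lookup, assuming jq is installed and reusing the placeholder NQN and bdev name from the earlier sketch:

  # List NSIDs already claimed by the subsystem (NQN is a placeholder).
  ./scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'
  # Or skip -n entirely and let the target allocate a free NSID.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1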
00:09:14.681 [2024-12-10 03:56:13.762041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:14.681 [2024-12-10 03:56:13.762061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:13.775 through 03:56:14.731 ...]
00:09:15.715 17189.67 IOPS, 134.29 MiB/s [2024-12-10T02:56:15.001Z]
00:09:15.715 [2024-12-10 03:56:14.745680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.715 [2024-12-10 03:56:14.745700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:14.759 through 03:56:14.883 ...]
00:09:15.715 [2024-12-10 03:56:14.897108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.715 [2024-12-10 03:56:14.897128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
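Across the three ticks the data path holds steady (17122.00, then 17175.50, then 17189.67 IOPS), so the failed add-namespace calls are not visibly disturbing the running I/O in this window. For triage, the repetition is easier to handle collapsed; a sketch against a saved copy of this console output, where console.log is a placeholder filename:

  # Count the collision errors and pull out only the throughput ticks.
  grep -c 'Requested NSID 1 already in use' console.log
  grep -E '[0-9]+\.[0-9]+ IOPS' console.log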
00:09:15.715 [2024-12-10 03:56:14.908215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.715 [2024-12-10 03:56:14.908233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats for every attempt from 03:56:14.922 through 03:56:15.444 ...]
00:09:16.231 [2024-12-10 03:56:15.458157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:16.231 [2024-12-10 03:56:15.458182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:16.231 [2024-12-10 03:56:15.466946]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.231 [2024-12-10 03:56:15.466965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.231 [2024-12-10 03:56:15.481008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.231 [2024-12-10 03:56:15.481027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.231 [2024-12-10 03:56:15.494747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.231 [2024-12-10 03:56:15.494767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.231 [2024-12-10 03:56:15.508639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.231 [2024-12-10 03:56:15.508657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.521931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.521949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.536208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.536227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.546914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.546933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.556368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.556387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.570876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.570896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.584104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.584124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.598202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.598220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.611375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.611394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.625180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.625199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.638561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.638580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.489 [2024-12-10 03:56:15.652390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.489 [2024-12-10 03:56:15.652409] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.665694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.665713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.674632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.674651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.688455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.688474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.702186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.702205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.715620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.715639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.729508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.729527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.738243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.738261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 17186.50 IOPS, 134.27 MiB/s [2024-12-10T02:56:15.776Z] [2024-12-10 03:56:15.747511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.747530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.490 [2024-12-10 03:56:15.761952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.490 [2024-12-10 03:56:15.761971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.775676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.775694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.789590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.789609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.803246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.803265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.816919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.816938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.830544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.830562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 
03:56:15.843956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.843974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.857439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.857458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.870859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.870880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.880106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.880125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.893910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.893930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.907763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.907785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.921593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.921613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.934947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.934972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.948812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.948831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.962461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.962481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.975826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.975845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:15.989258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:15.989278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:16.003210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:16.003229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.748 [2024-12-10 03:56:16.017281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.748 [2024-12-10 03:56:16.017300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.030886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.030906] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.045214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.045234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.058862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.058882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.072580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.072601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.086627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.086647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.100040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.100059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.113581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.113600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.126814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.126834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.140713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.140732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.149535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.149554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.163739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.163759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.177775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.177794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.191031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.191056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.200141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.200161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.213989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.006 [2024-12-10 03:56:16.214008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.006 [2024-12-10 03:56:16.228057] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.007 [2024-12-10 03:56:16.228077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.007 [2024-12-10 03:56:16.242293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.007 [2024-12-10 03:56:16.242313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.007 [2024-12-10 03:56:16.252798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.007 [2024-12-10 03:56:16.252817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.007 [2024-12-10 03:56:16.262108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.007 [2024-12-10 03:56:16.262126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.007 [2024-12-10 03:56:16.276478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.007 [2024-12-10 03:56:16.276496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.290534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.290552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.304006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.304025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.317853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.317871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.331659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.331678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.345177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.345195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.358403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.358422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.371934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.371953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.385581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.385599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.399139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.399159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.413004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.413024] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.426698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.426717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.440621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.440647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.453981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.454000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.467355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.467373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.476144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.476162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.490111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.490130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.503859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.503878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.517417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.517437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.531264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.531284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.265 [2024-12-10 03:56:16.540063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.265 [2024-12-10 03:56:16.540082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.554538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.554558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.563540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.563560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.578019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.578039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.591454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.591473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.605093] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.605112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.618640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.618658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.632092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.632111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.645708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.645727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.659093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.659111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.672484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.672503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.686254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.686277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.699853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.699872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.713679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.713700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.727027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.727047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.740970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.740990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 17186.60 IOPS, 134.27 MiB/s [2024-12-10T02:56:16.810Z] [2024-12-10 03:56:16.751790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.751808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 00:09:17.524 Latency(us) 00:09:17.524 [2024-12-10T02:56:16.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.524 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:17.524 Nvme1n1 : 5.01 17188.04 134.28 0.00 0.00 7439.45 3432.84 15354.15 00:09:17.524 [2024-12-10T02:56:16.810Z] =================================================================================================================== 00:09:17.524 [2024-12-10T02:56:16.810Z] Total : 17188.04 134.28 0.00 0.00 7439.45 3432.84 15354.15 00:09:17.524 [2024-12-10 
03:56:16.763065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.763081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.775101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.775116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.787137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.787158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.524 [2024-12-10 03:56:16.799164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.524 [2024-12-10 03:56:16.799184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.811194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.811209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.823230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.823247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.835261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.835277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.847288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.847302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.859325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.859343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.871345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.871355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.883383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.883398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.895410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.895422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 [2024-12-10 03:56:16.907442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.782 [2024-12-10 03:56:16.907453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4130266) - No such process 00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4130266 00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.782 03:56:16 
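The flood of paired errors above is SPDK's expected rejection path: the test loops the nvmf_subsystem_add_ns RPC for NSID 1 while namespace 1 is still attached, so spdk_nvmf_subsystem_add_ns_ext refuses each attempt and nvmf_rpc_ns_paused reports the failure. A minimal sketch of reproducing the same rejection by hand against a running target, assuming scripts/rpc.py can reach the target's RPC socket; the bdev and NQN names simply mirror this transcript:

    # sketch: trigger "Requested NSID 1 already in use" on purpose
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                            # 64 MiB backing bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: succeeds, NSID 1 now attached
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add: rejected, NSID 1 in use

Each rejected call costs only an RPC round trip, which is why the error pair recurs every few milliseconds for the whole run.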
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.782 delay0
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.782 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:18.040 [2024-12-10 03:56:17.098314] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:24.595 Initializing NVMe Controllers
00:09:24.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:24.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:24.595 Initialization complete. Launching workers.
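For the abort stage just configured above, the script swaps the raw malloc namespace for delay0, a delay bdev layered on malloc0, so in-flight I/O stays pending long enough for aborts to land; the worker results follow below. A hand-run sketch of the same stacking, assuming the target and malloc0 from the earlier sketch (the delay bdev's -r/-t/-w/-n values are average and p99 latencies for reads and writes in microseconds, so 1000000 is roughly one second):

    # sketch: put a ~1 s delay bdev in front of the namespace, then race aborts against it
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'   # 5 s of queued I/O with aborts issued in flight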
00:09:24.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 868 00:09:24.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1155, failed to submit 33 00:09:24.595 success 967, unsuccessful 188, failed 0 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.595 rmmod nvme_tcp 00:09:24.595 rmmod nvme_fabrics 00:09:24.595 rmmod nvme_keyring 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4128459 ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4128459 ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128459' 00:09:24.595 killing process with pid 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4128459 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:24.595 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.596 03:56:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.596 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.596 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.596 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.596 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.596 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.506 00:09:26.506 real 0m31.348s 00:09:26.506 user 0m41.837s 00:09:26.506 sys 0m11.140s 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.506 ************************************ 00:09:26.506 END TEST nvmf_zcopy 00:09:26.506 ************************************ 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.506 ************************************ 00:09:26.506 START TEST nvmf_nmic 00:09:26.506 ************************************ 00:09:26.506 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.766 * Looking for test storage... 
00:09:26.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.766 --rc genhtml_branch_coverage=1 00:09:26.766 --rc genhtml_function_coverage=1 00:09:26.766 --rc genhtml_legend=1 00:09:26.766 --rc geninfo_all_blocks=1 00:09:26.766 --rc geninfo_unexecuted_blocks=1 00:09:26.766 00:09:26.766 ' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.766 --rc genhtml_branch_coverage=1 00:09:26.766 --rc genhtml_function_coverage=1 00:09:26.766 --rc genhtml_legend=1 00:09:26.766 --rc geninfo_all_blocks=1 00:09:26.766 --rc geninfo_unexecuted_blocks=1 00:09:26.766 00:09:26.766 ' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.766 --rc genhtml_branch_coverage=1 00:09:26.766 --rc genhtml_function_coverage=1 00:09:26.766 --rc genhtml_legend=1 00:09:26.766 --rc geninfo_all_blocks=1 00:09:26.766 --rc geninfo_unexecuted_blocks=1 00:09:26.766 00:09:26.766 ' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.766 --rc genhtml_branch_coverage=1 00:09:26.766 --rc genhtml_function_coverage=1 00:09:26.766 --rc genhtml_legend=1 00:09:26.766 --rc geninfo_all_blocks=1 00:09:26.766 --rc geninfo_unexecuted_blocks=1 00:09:26.766 00:09:26.766 ' 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
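The xtrace above is scripts/common.sh comparing two version strings (lt 1.15 2) to choose compatible lcov coverage flags: each string is split into fields on '.', '-' and ':', the fields are compared left to right up to the longer field count, and missing fields count as 0. The same idea as a standalone sketch; the function name here is illustrative, not the tree's:

    # sketch of field-wise dotted-version compare; succeeds when $1 < $2
    version_lt() {
        local IFS=.-:                # split fields on '.', '-' and ':'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                     # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'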
00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:26.766 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same three toolchain dirs repeated; duplicates condensed ...]:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three toolchain dirs repeated; duplicates condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three toolchain dirs repeated; duplicates condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:26.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.767
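The "[: : integer expression expected" complaint above is a classic test(1) pitfall surfaced by build_nvmf_app_args: '[' '' -eq 1 ']' asks -eq to compare an empty string against an integer, so test prints the error and returns non-zero, and the script simply carries on down the false branch. A generic sketch of the failure and the usual guard; the variable name is a stand-in, not the one common.sh line 33 actually tests:

    # sketch: empty string vs. integer comparison in test(1)
    unset FLAG
    [ "$FLAG" -eq 1 ]            # prints "[: : integer expression expected" and exits non-zero
    [ "${FLAG:-0}" -eq 1 ]       # guard: default empty/unset to 0 so -eq always sees an integer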
03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.767 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.337 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.338 03:56:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.338 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.338 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
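gather_supported_nvmf_pci_devs above whitelists known Intel E810/X722 and Mellanox PCI device IDs, then resolves each matching PCI address to its kernel net device through sysfs, which is where the two "Found net devices under 0000:af:00.x: cvl_0_x" lines come from. A condensed sketch of that loop using the variable names visible in the trace (the rdma branches and the operstate "up" check are elided):

    pci_devs=("${e810[@]}")                       # e810 NIC class selected, transport is tcp
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done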
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:09:33.338 00:09:33.338 --- 10.0.0.2 ping statistics --- 00:09:33.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.338 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:09:33.338 00:09:33.338 --- 10.0.0.1 ping statistics --- 00:09:33.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.338 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4135748 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4135748 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4135748 ']' 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.338 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.338 [2024-12-10 03:56:32.027763] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
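nvmf_tcp_init splits the two ports of the E810 adapter (0000:af:00.0 and 0000:af:00.1) across network namespaces so one host can play both roles: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings verify the path in each direction. The commands, exactly as executed above:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The SPDK_NVMF comment on the iptables rule matters later: teardown removes only rules carrying that tag.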
00:09:33.338 [2024-12-10 03:56:32.027814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.338 [2024-12-10 03:56:32.106217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.338 [2024-12-10 03:56:32.147325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.338 [2024-12-10 03:56:32.147364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.338 [2024-12-10 03:56:32.147372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.338 [2024-12-10 03:56:32.147377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.338 [2024-12-10 03:56:32.147382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.338 [2024-12-10 03:56:32.148694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.339 [2024-12-10 03:56:32.148801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.339 [2024-12-10 03:56:32.148887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.339 [2024-12-10 03:56:32.148888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 [2024-12-10 03:56:32.298186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 Malloc0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic 
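nvmfappstart launches nvmf_tgt inside the target namespace. The -m 0xF core mask is why four "Reactor started" notices appear (cores 0 through 3), and -e 0xFFFF is the tracepoint group mask the spdk_trace hints refer to. Assembled from the NVMF_APP lines in the trace (the base binary path is taken from the @508 record):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # nvmf/common.sh@29
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # @293: prefix "ip netns exec cvl_0_0_ns_spdk"
    "${NVMF_APP[@]}" -m 0xF &                                # @508, here started as pid 4135748
    nvmfpid=$!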
-- common/autotest_common.sh@10 -- # set +x 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 [2024-12-10 03:56:32.364960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:33.339 test case1: single bdev can't be used in multiple subsystems 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 [2024-12-10 03:56:32.388844] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:33.339 [2024-12-10 03:56:32.388864] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:33.339 [2024-12-10 03:56:32.388871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.339 request: 00:09:33.339 { 00:09:33.339 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:33.339 "namespace": { 00:09:33.339 "bdev_name": "Malloc0", 00:09:33.339 "no_auto_visible": false, 
00:09:33.339 "hide_metadata": false 00:09:33.339 }, 00:09:33.339 "method": "nvmf_subsystem_add_ns", 00:09:33.339 "req_id": 1 00:09:33.339 } 00:09:33.339 Got JSON-RPC error response 00:09:33.339 response: 00:09:33.339 { 00:09:33.339 "code": -32602, 00:09:33.339 "message": "Invalid parameters" 00:09:33.339 } 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:33.339 Adding namespace failed - expected result. 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:33.339 test case2: host connect to nvmf target in multiple paths 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 [2024-12-10 03:56:32.400964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.339 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.272 03:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:35.644 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.644 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:35.644 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.644 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:35.644 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.541 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:37.541 03:56:36 
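Strung together, the nmic test's control-plane calls above reduce to the rpc.py sequence below (rpc_cmd in the trace is a wrapper around scripts/rpc.py). The failure is deliberate: "test case1" checks that Malloc0, already claimed with type exclusive_write by cnode1, cannot be added to cnode2, which is exactly the -32602 "Invalid parameters" response in the log. "test case2" then adds a second listener so the host can reach the same subsystem over two paths:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: -32602, expected result
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421             # second path, same subsystem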
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:37.541 [global] 00:09:37.541 thread=1 00:09:37.541 invalidate=1 00:09:37.541 rw=write 00:09:37.541 time_based=1 00:09:37.541 runtime=1 00:09:37.541 ioengine=libaio 00:09:37.541 direct=1 00:09:37.541 bs=4096 00:09:37.541 iodepth=1 00:09:37.541 norandommap=0 00:09:37.541 numjobs=1 00:09:37.541 00:09:37.541 verify_dump=1 00:09:37.541 verify_backlog=512 00:09:37.541 verify_state_save=0 00:09:37.541 do_verify=1 00:09:37.541 verify=crc32c-intel 00:09:37.541 [job0] 00:09:37.541 filename=/dev/nvme0n1 00:09:37.541 Could not set queue depth (nvme0n1) 00:09:37.798 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.798 fio-3.35 00:09:37.798 Starting 1 thread 00:09:39.172 00:09:39.172 job0: (groupid=0, jobs=1): err= 0: pid=4136802: Tue Dec 10 03:56:38 2024 00:09:39.172 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:39.172 slat (nsec): min=9812, max=24891, avg=23171.86, stdev=3013.37 00:09:39.172 clat (usec): min=40602, max=41017, avg=40948.65, stdev=87.61 00:09:39.172 lat (usec): min=40612, max=41041, avg=40971.82, stdev=90.31 00:09:39.172 clat percentiles (usec): 00:09:39.172 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:39.172 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.172 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.172 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.172 | 99.99th=[41157] 00:09:39.172 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:39.172 slat (usec): min=9, max=26735, avg=63.27, stdev=1181.09 00:09:39.172 clat (usec): min=111, max=447, avg=132.06, stdev=21.89 00:09:39.172 lat (usec): min=122, max=27110, avg=195.33, stdev=1191.99 00:09:39.172 clat percentiles (usec): 00:09:39.172 | 1.00th=[ 118], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 124], 00:09:39.172 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:09:39.172 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 147], 95.00th=[ 169], 00:09:39.172 | 99.00th=[ 180], 99.50th=[ 198], 99.90th=[ 449], 99.95th=[ 449], 00:09:39.172 | 99.99th=[ 449] 00:09:39.172 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.172 lat (usec) : 250=95.51%, 500=0.37% 00:09:39.172 lat (msec) : 50=4.12% 00:09:39.172 cpu : usr=0.40%, sys=0.40%, ctx=537, majf=0, minf=1 00:09:39.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.172 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.172 00:09:39.172 Run status group 0 (all jobs): 00:09:39.172 READ: bw=87.6KiB/s (89.8kB/s), 87.6KiB/s-87.6KiB/s (89.8kB/s-89.8kB/s), io=88.0KiB (90.1kB), run=1004-1004msec 00:09:39.172 WRITE: bw=2040KiB/s (2089kB/s), 2040KiB/s-2040KiB/s (2089kB/s-2089kB/s), io=2048KiB (2097kB), run=1004-1004msec 00:09:39.172 00:09:39.172 Disk stats (read/write): 00:09:39.172 nvme0n1: ios=45/512, merge=0/0, ticks=1764/62, in_queue=1826, util=98.50% 00:09:39.172 03:56:38 
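The job file fio-wrapper dumps above evidently maps its flags onto plain fio options (-i 4096 becomes bs, -d 1 iodepth, -t write rw, -r 1 runtime, -v the crc32c verify settings). An equivalent standalone invocation against the same /dev/nvme0n1 node, for anyone reproducing this workload by hand:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --bs=4096 --iodepth=1 --rw=write --time_based=1 --runtime=1 \
        --thread=1 --invalidate=1 --norandommap=0 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0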
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:39.172 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.172 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:39.172 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:39.172 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.172 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.173 rmmod nvme_tcp 00:09:39.173 rmmod nvme_fabrics 00:09:39.173 rmmod nvme_keyring 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4135748 ']' 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4135748 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4135748 ']' 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4135748 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135748 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4135748' 00:09:39.173 killing process with pid 4135748 00:09:39.173 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4135748 00:09:39.173 03:56:38 
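Teardown mirrors setup: disconnect the host (both paths at once, hence "disconnected 2 controller(s)"), unload the kernel initiator modules, kill the target by pid, then restore iptables minus the SPDK-tagged rules. Condensed from the trace records above and just below (the _remove_spdk_ns body is elided in the log, so namespace cleanup is not shown here):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both controllers
    modprobe -v -r nvme-tcp                                # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 4135748                                           # killprocess $nvmfpid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip only SPDK-tagged rules
    ip -4 addr flush cvl_0_1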
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4135748 00:09:39.431 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:39.431 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:39.431 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.432 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:41.966 00:09:41.966 real 0m14.927s 00:09:41.966 user 0m33.073s 00:09:41.966 sys 0m5.161s 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:41.966 ************************************ 00:09:41.966 END TEST nvmf_nmic 00:09:41.966 ************************************ 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.966 ************************************ 00:09:41.966 START TEST nvmf_fio_target 00:09:41.966 ************************************ 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:41.966 * Looking for test storage... 
00:09:41.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.966 --rc genhtml_branch_coverage=1 00:09:41.966 --rc genhtml_function_coverage=1 00:09:41.966 --rc genhtml_legend=1 00:09:41.966 --rc geninfo_all_blocks=1 00:09:41.966 --rc geninfo_unexecuted_blocks=1 00:09:41.966 00:09:41.966 ' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.966 --rc genhtml_branch_coverage=1 00:09:41.966 --rc genhtml_function_coverage=1 00:09:41.966 --rc genhtml_legend=1 00:09:41.966 --rc geninfo_all_blocks=1 00:09:41.966 --rc geninfo_unexecuted_blocks=1 00:09:41.966 00:09:41.966 ' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.966 --rc genhtml_branch_coverage=1 00:09:41.966 --rc genhtml_function_coverage=1 00:09:41.966 --rc genhtml_legend=1 00:09:41.966 --rc geninfo_all_blocks=1 00:09:41.966 --rc geninfo_unexecuted_blocks=1 00:09:41.966 00:09:41.966 ' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:41.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.966 --rc genhtml_branch_coverage=1 00:09:41.966 --rc genhtml_function_coverage=1 00:09:41.966 --rc genhtml_legend=1 00:09:41.966 --rc geninfo_all_blocks=1 00:09:41.966 --rc geninfo_unexecuted_blocks=1 00:09:41.966 00:09:41.966 ' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
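The scripts/common.sh churn above is a field-by-field version comparison: the lcov version string is split on ".", "-" and ":", missing fields are treated as zero, and the fields are compared numerically, so "lt 1.15 2" holds and the lcov 2.x --rc option remap gets exported. A compact re-creation of the logic visible in the trace (helper names match; the decimal() validation and bookkeeping locals are condensed):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0  # non-numeric fields count as 0
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]] && return 0 || return 1; fi
            if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]] && return 0 || return 1; fi
        done
        [[ $op == *'='* ]] && return 0 || return 1   # all fields equal
    }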
uname -s 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.966 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.967 03:56:40 
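common.sh (sourced again just above, this time by fio.sh) derives the initiator identity once per script: nvme gen-hostnqn emits a UUID-based NQN, and the host ID reused on every nvme connect is that NQN's UUID suffix, which is how the earlier connect calls got --hostid=80b56b8f-cbc7-e911-906e-0017a4403562. A sketch of the derivation; the ${NVME_HOSTNQN##*:} expansion is an assumption, since the log only shows the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")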
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:41.967 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.533 03:56:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:48.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:48.533 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.533 03:56:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:48.533 Found net devices under 0000:af:00.0: cvl_0_0 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:48.533 Found net devices under 0000:af:00.1: cvl_0_1 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.533 03:56:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.533 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:09:48.534 00:09:48.534 --- 10.0.0.2 ping statistics --- 00:09:48.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.534 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:48.534 00:09:48.534 --- 10.0.0.1 ping statistics --- 00:09:48.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.534 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4140499 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4140499 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4140499 ']' 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.534 03:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.534 [2024-12-10 03:56:47.009848] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
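(The nvmf_tcp_init trace above, condensed to the bare commands it ran: the target-side port cvl_0_0 is moved into a private network namespace, the initiator-side port cvl_0_1 stays in the root namespace, addresses are assigned, the NVMe/TCP port is opened, reachability is verified both ways, and nvmf_tgt is then launched inside the namespace. This is a replay of commands visible in the trace, apart from trimmed paths and the dropped iptables comment tag.)

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
  ping -c 1 10.0.0.2                                # root ns -> target (0.431 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator (0.199 ms)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF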
00:09:48.534 [2024-12-10 03:56:47.009889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.534 [2024-12-10 03:56:47.085250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.534 [2024-12-10 03:56:47.124053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.534 [2024-12-10 03:56:47.124090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.534 [2024-12-10 03:56:47.124098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.534 [2024-12-10 03:56:47.124104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.534 [2024-12-10 03:56:47.124109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.534 [2024-12-10 03:56:47.125602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.534 [2024-12-10 03:56:47.125707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.534 [2024-12-10 03:56:47.125782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.534 [2024-12-10 03:56:47.125782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.534 [2024-12-10 03:56:47.431597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:48.534 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.792 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:48.792 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.050 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:49.050 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.050 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:49.050 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:49.308 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.566 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:49.566 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.823 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:49.823 03:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.081 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:50.081 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:50.081 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:50.339 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:50.339 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.596 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:50.596 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.854 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.854 [2024-12-10 03:56:50.105917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.854 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:51.111 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:51.368 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.742 03:56:51 
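(The target configuration that fio.sh just drove over rpc.py, collapsed into its plain sequence -- paths shortened and the per-bdev repetition folded into loops, otherwise as traced: a TCP transport, seven 64 MB malloc bdevs with 512-byte blocks, a RAID0 over two of them, a concat over three more, and one subsystem exporting four namespaces on 10.0.0.2:4420.)

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done    # Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0   -r 0      -z 64 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
  done
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

(The four namespaces then surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is what the waitforserial SPDKISFASTANDAWESOME 4 step below polls lsblk for.)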
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:52.742 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:52.742 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.742 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:52.742 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:52.742 03:56:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:54.641 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:54.641 [global] 00:09:54.641 thread=1 00:09:54.641 invalidate=1 00:09:54.641 rw=write 00:09:54.641 time_based=1 00:09:54.641 runtime=1 00:09:54.641 ioengine=libaio 00:09:54.641 direct=1 00:09:54.641 bs=4096 00:09:54.641 iodepth=1 00:09:54.641 norandommap=0 00:09:54.641 numjobs=1 00:09:54.641 00:09:54.641 verify_dump=1 00:09:54.641 verify_backlog=512 00:09:54.641 verify_state_save=0 00:09:54.641 do_verify=1 00:09:54.641 verify=crc32c-intel 00:09:54.641 [job0] 00:09:54.641 filename=/dev/nvme0n1 00:09:54.641 [job1] 00:09:54.641 filename=/dev/nvme0n2 00:09:54.641 [job2] 00:09:54.641 filename=/dev/nvme0n3 00:09:54.641 [job3] 00:09:54.641 filename=/dev/nvme0n4 00:09:54.641 Could not set queue depth (nvme0n1) 00:09:54.641 Could not set queue depth (nvme0n2) 00:09:54.641 Could not set queue depth (nvme0n3) 00:09:54.641 Could not set queue depth (nvme0n4) 00:09:54.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.899 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.899 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.899 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.899 fio-3.35 00:09:54.899 Starting 4 threads 00:09:56.272 00:09:56.272 job0: (groupid=0, jobs=1): err= 0: pid=4141821: Tue Dec 10 03:56:55 2024 00:09:56.272 read: IOPS=1247, BW=4988KiB/s (5108kB/s)(5048KiB/1012msec) 00:09:56.272 slat (nsec): min=6644, max=25990, avg=7796.95, stdev=1779.76 00:09:56.272 clat (usec): min=173, max=41981, avg=569.92, stdev=3622.74 00:09:56.272 lat (usec): min=181, max=41988, avg=577.71, stdev=3622.99 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 223], 
00:09:56.272 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:09:56.272 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:09:56.272 | 99.00th=[ 474], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:09:56.272 | 99.99th=[42206] 00:09:56.272 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:09:56.272 slat (usec): min=9, max=20669, avg=24.39, stdev=527.11 00:09:56.272 clat (usec): min=110, max=359, avg=155.06, stdev=21.24 00:09:56.272 lat (usec): min=120, max=20988, avg=179.45, stdev=531.72 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:09:56.272 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:09:56.272 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 190], 00:09:56.272 | 99.00th=[ 212], 99.50th=[ 225], 99.90th=[ 322], 99.95th=[ 359], 00:09:56.272 | 99.99th=[ 359] 00:09:56.272 bw ( KiB/s): min= 5288, max= 7000, per=20.84%, avg=6144.00, stdev=1210.57, samples=2 00:09:56.272 iops : min= 1322, max= 1750, avg=1536.00, stdev=302.64, samples=2 00:09:56.272 lat (usec) : 250=80.70%, 500=18.91%, 750=0.04% 00:09:56.272 lat (msec) : 50=0.36% 00:09:56.272 cpu : usr=1.58%, sys=2.47%, ctx=2802, majf=0, minf=2 00:09:56.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.272 issued rwts: total=1262,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.272 job1: (groupid=0, jobs=1): err= 0: pid=4141822: Tue Dec 10 03:56:55 2024 00:09:56.272 read: IOPS=2002, BW=8012KiB/s (8204kB/s)(8020KiB/1001msec) 00:09:56.272 slat (nsec): min=6958, max=42508, avg=8006.62, stdev=1395.96 00:09:56.272 clat (usec): min=183, max=40819, avg=307.67, stdev=909.02 00:09:56.272 lat (usec): min=191, max=40826, avg=315.68, stdev=909.02 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 225], 00:09:56.272 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 262], 00:09:56.272 | 70.00th=[ 293], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 449], 00:09:56.272 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 545], 00:09:56.272 | 99.99th=[40633] 00:09:56.272 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:56.272 slat (nsec): min=9919, max=40479, avg=11182.89, stdev=1464.87 00:09:56.272 clat (usec): min=121, max=337, avg=162.23, stdev=18.41 00:09:56.272 lat (usec): min=132, max=377, avg=173.42, stdev=18.59 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:09:56.272 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:09:56.272 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 198], 00:09:56.272 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 253], 99.95th=[ 265], 00:09:56.272 | 99.99th=[ 338] 00:09:56.272 bw ( KiB/s): min= 8192, max= 8192, per=27.78%, avg=8192.00, stdev= 0.00, samples=1 00:09:56.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:56.272 lat (usec) : 250=76.07%, 500=23.56%, 750=0.35% 00:09:56.272 lat (msec) : 50=0.02% 00:09:56.272 cpu : usr=3.10%, sys=6.60%, ctx=4053, majf=0, minf=2 00:09:56.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:56.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.272 issued rwts: total=2005,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.272 job2: (groupid=0, jobs=1): err= 0: pid=4141823: Tue Dec 10 03:56:55 2024 00:09:56.272 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:56.272 slat (nsec): min=6817, max=25917, avg=8165.99, stdev=1625.88 00:09:56.272 clat (usec): min=175, max=41032, avg=423.06, stdev=2748.03 00:09:56.272 lat (usec): min=183, max=41043, avg=431.23, stdev=2748.75 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:09:56.272 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:09:56.272 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 297], 00:09:56.272 | 99.00th=[ 412], 99.50th=[ 7111], 99.90th=[41157], 99.95th=[41157], 00:09:56.272 | 99.99th=[41157] 00:09:56.272 write: IOPS=1826, BW=7305KiB/s (7480kB/s)(7312KiB/1001msec); 0 zone resets 00:09:56.272 slat (nsec): min=9919, max=40358, avg=11630.98, stdev=2039.94 00:09:56.272 clat (usec): min=121, max=352, avg=168.72, stdev=24.52 00:09:56.272 lat (usec): min=132, max=385, avg=180.35, stdev=25.14 00:09:56.272 clat percentiles (usec): 00:09:56.272 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:56.272 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:09:56.272 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 212], 00:09:56.272 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 310], 99.95th=[ 355], 00:09:56.272 | 99.99th=[ 355] 00:09:56.272 bw ( KiB/s): min= 4096, max= 4096, per=13.89%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.272 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.272 lat (usec) : 250=88.32%, 500=11.44% 00:09:56.272 lat (msec) : 10=0.03%, 50=0.21% 00:09:56.273 cpu : usr=1.60%, sys=3.60%, ctx=3366, majf=0, minf=1 00:09:56.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.273 issued rwts: total=1536,1828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.273 job3: (groupid=0, jobs=1): err= 0: pid=4141824: Tue Dec 10 03:56:55 2024 00:09:56.273 read: IOPS=1871, BW=7485KiB/s (7664kB/s)(7492KiB/1001msec) 00:09:56.273 slat (nsec): min=7123, max=38223, avg=8223.97, stdev=1491.92 00:09:56.273 clat (usec): min=179, max=40894, avg=323.76, stdev=1865.28 00:09:56.273 lat (usec): min=187, max=40904, avg=331.98, stdev=1865.29 00:09:56.273 clat percentiles (usec): 00:09:56.273 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 223], 00:09:56.273 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:09:56.273 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:09:56.273 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[40633], 99.95th=[40633], 00:09:56.273 | 99.99th=[40633] 00:09:56.273 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:56.273 slat (nsec): min=10090, max=43201, avg=11486.62, stdev=2006.27 00:09:56.273 clat (usec): min=128, max=330, avg=167.59, stdev=18.47 00:09:56.273 lat (usec): min=139, max=369, avg=179.08, 
stdev=18.80 00:09:56.273 clat percentiles (usec): 00:09:56.273 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:09:56.273 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:09:56.273 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:09:56.273 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 233], 99.95th=[ 233], 00:09:56.273 | 99.99th=[ 330] 00:09:56.273 bw ( KiB/s): min= 8192, max= 8192, per=27.78%, avg=8192.00, stdev= 0.00, samples=1 00:09:56.273 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:56.273 lat (usec) : 250=88.88%, 500=11.02% 00:09:56.273 lat (msec) : 50=0.10% 00:09:56.273 cpu : usr=3.90%, sys=5.50%, ctx=3921, majf=0, minf=1 00:09:56.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.273 issued rwts: total=1873,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.273 00:09:56.273 Run status group 0 (all jobs): 00:09:56.273 READ: bw=25.8MiB/s (27.0MB/s), 4988KiB/s-8012KiB/s (5108kB/s-8204kB/s), io=26.1MiB (27.3MB), run=1001-1012msec 00:09:56.273 WRITE: bw=28.8MiB/s (30.2MB/s), 6071KiB/s-8184KiB/s (6217kB/s-8380kB/s), io=29.1MiB (30.6MB), run=1001-1012msec 00:09:56.273 00:09:56.273 Disk stats (read/write): 00:09:56.273 nvme0n1: ios=1159/1536, merge=0/0, ticks=1538/236, in_queue=1774, util=97.79% 00:09:56.273 nvme0n2: ios=1536/1906, merge=0/0, ticks=485/286, in_queue=771, util=86.76% 00:09:56.273 nvme0n3: ios=1127/1536, merge=0/0, ticks=1538/253, in_queue=1791, util=98.33% 00:09:56.273 nvme0n4: ios=1536/1688, merge=0/0, ticks=510/260, in_queue=770, util=89.68% 00:09:56.273 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:56.273 [global] 00:09:56.273 thread=1 00:09:56.273 invalidate=1 00:09:56.273 rw=randwrite 00:09:56.273 time_based=1 00:09:56.273 runtime=1 00:09:56.273 ioengine=libaio 00:09:56.273 direct=1 00:09:56.273 bs=4096 00:09:56.273 iodepth=1 00:09:56.273 norandommap=0 00:09:56.273 numjobs=1 00:09:56.273 00:09:56.273 verify_dump=1 00:09:56.273 verify_backlog=512 00:09:56.273 verify_state_save=0 00:09:56.273 do_verify=1 00:09:56.273 verify=crc32c-intel 00:09:56.273 [job0] 00:09:56.273 filename=/dev/nvme0n1 00:09:56.273 [job1] 00:09:56.273 filename=/dev/nvme0n2 00:09:56.273 [job2] 00:09:56.273 filename=/dev/nvme0n3 00:09:56.273 [job3] 00:09:56.273 filename=/dev/nvme0n4 00:09:56.273 Could not set queue depth (nvme0n1) 00:09:56.273 Could not set queue depth (nvme0n2) 00:09:56.273 Could not set queue depth (nvme0n3) 00:09:56.273 Could not set queue depth (nvme0n4) 00:09:56.530 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.530 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.530 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.530 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.530 fio-3.35 00:09:56.530 Starting 4 threads 00:09:57.922 00:09:57.922 job0: (groupid=0, jobs=1): err= 0: pid=4142186: Tue Dec 10 03:56:56 2024 
00:09:57.922 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:09:57.922 slat (nsec): min=8982, max=28144, avg=21770.91, stdev=4414.44 00:09:57.922 clat (usec): min=262, max=42013, avg=39209.03, stdev=8494.36 00:09:57.922 lat (usec): min=290, max=42036, avg=39230.80, stdev=8493.04 00:09:57.922 clat percentiles (usec): 00:09:57.922 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:57.922 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:57.922 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:57.922 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:57.922 | 99.99th=[42206] 00:09:57.922 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:57.922 slat (nsec): min=9946, max=35897, avg=11753.73, stdev=2181.11 00:09:57.922 clat (usec): min=133, max=259, avg=198.46, stdev=39.55 00:09:57.922 lat (usec): min=144, max=295, avg=210.21, stdev=39.37 00:09:57.922 clat percentiles (usec): 00:09:57.922 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:57.922 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 239], 00:09:57.922 | 70.00th=[ 241], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:09:57.922 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 260], 99.95th=[ 260], 00:09:57.922 | 99.99th=[ 260] 00:09:57.922 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.922 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.922 lat (usec) : 250=94.77%, 500=1.12% 00:09:57.922 lat (msec) : 50=4.11% 00:09:57.922 cpu : usr=0.10%, sys=0.99%, ctx=538, majf=0, minf=1 00:09:57.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.923 job1: (groupid=0, jobs=1): err= 0: pid=4142187: Tue Dec 10 03:56:56 2024 00:09:57.923 read: IOPS=335, BW=1343KiB/s (1375kB/s)(1352KiB/1007msec) 00:09:57.923 slat (nsec): min=6726, max=24079, avg=8479.67, stdev=3727.04 00:09:57.923 clat (usec): min=185, max=41747, avg=2660.63, stdev=9596.91 00:09:57.923 lat (usec): min=193, max=41757, avg=2669.11, stdev=9598.19 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 223], 20.00th=[ 233], 00:09:57.923 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:09:57.923 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 429], 95.00th=[40633], 00:09:57.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:57.923 | 99.99th=[41681] 00:09:57.923 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:57.923 slat (nsec): min=9654, max=37061, avg=10814.48, stdev=1661.86 00:09:57.923 clat (usec): min=137, max=295, avg=188.49, stdev=23.27 00:09:57.923 lat (usec): min=147, max=332, avg=199.31, stdev=23.60 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 169], 00:09:57.923 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:57.923 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:09:57.923 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 297], 99.95th=[ 297], 00:09:57.923 | 99.99th=[ 297] 00:09:57.923 bw ( 
KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.923 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.923 lat (usec) : 250=81.88%, 500=15.18%, 750=0.59% 00:09:57.923 lat (msec) : 50=2.35% 00:09:57.923 cpu : usr=0.30%, sys=0.89%, ctx=853, majf=0, minf=1 00:09:57.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 issued rwts: total=338,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.923 job2: (groupid=0, jobs=1): err= 0: pid=4142188: Tue Dec 10 03:56:56 2024 00:09:57.923 read: IOPS=1360, BW=5441KiB/s (5571kB/s)(5604KiB/1030msec) 00:09:57.923 slat (nsec): min=7619, max=43742, avg=8657.31, stdev=2188.51 00:09:57.923 clat (usec): min=163, max=41087, avg=529.05, stdev=3597.71 00:09:57.923 lat (usec): min=171, max=41109, avg=537.70, stdev=3598.84 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 186], 00:09:57.923 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:09:57.923 | 70.00th=[ 221], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:09:57.923 | 99.00th=[ 408], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:57.923 | 99.99th=[41157] 00:09:57.923 write: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec); 0 zone resets 00:09:57.923 slat (nsec): min=10736, max=39339, avg=11972.65, stdev=1903.78 00:09:57.923 clat (usec): min=120, max=307, avg=161.17, stdev=36.91 00:09:57.923 lat (usec): min=132, max=343, avg=173.14, stdev=37.41 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:09:57.923 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 153], 00:09:57.923 | 70.00th=[ 169], 80.00th=[ 190], 90.00th=[ 217], 95.00th=[ 241], 00:09:57.923 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 310], 00:09:57.923 | 99.99th=[ 310] 00:09:57.923 bw ( KiB/s): min= 184, max=12104, per=51.50%, avg=6144.00, stdev=8428.71, samples=2 00:09:57.923 iops : min= 46, max= 3026, avg=1536.00, stdev=2107.18, samples=2 00:09:57.923 lat (usec) : 250=91.15%, 500=8.48% 00:09:57.923 lat (msec) : 50=0.37% 00:09:57.923 cpu : usr=2.24%, sys=4.96%, ctx=2937, majf=0, minf=1 00:09:57.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 issued rwts: total=1401,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.923 job3: (groupid=0, jobs=1): err= 0: pid=4142189: Tue Dec 10 03:56:56 2024 00:09:57.923 read: IOPS=44, BW=179KiB/s (183kB/s)(180KiB/1008msec) 00:09:57.923 slat (nsec): min=7287, max=27600, avg=14455.44, stdev=6946.22 00:09:57.923 clat (usec): min=224, max=41149, avg=20151.22, stdev=20581.38 00:09:57.923 lat (usec): min=236, max=41161, avg=20165.67, stdev=20586.17 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:09:57.923 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 289], 60.00th=[40633], 00:09:57.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:09:57.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:57.923 | 99.99th=[41157] 00:09:57.923 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:57.923 slat (nsec): min=10054, max=38498, avg=11549.21, stdev=2150.31 00:09:57.923 clat (usec): min=138, max=389, avg=180.57, stdev=29.94 00:09:57.923 lat (usec): min=149, max=427, avg=192.12, stdev=30.89 00:09:57.923 clat percentiles (usec): 00:09:57.923 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:57.923 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 180], 00:09:57.923 | 70.00th=[ 190], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 233], 00:09:57.923 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 392], 99.95th=[ 392], 00:09:57.923 | 99.99th=[ 392] 00:09:57.923 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.923 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.923 lat (usec) : 250=93.72%, 500=2.33% 00:09:57.923 lat (msec) : 50=3.95% 00:09:57.923 cpu : usr=0.60%, sys=0.70%, ctx=558, majf=0, minf=2 00:09:57.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.923 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.923 00:09:57.923 Run status group 0 (all jobs): 00:09:57.923 READ: bw=7017KiB/s (7186kB/s), 90.9KiB/s-5441KiB/s (93.1kB/s-5571kB/s), io=7228KiB (7401kB), run=1007-1030msec 00:09:57.923 WRITE: bw=11.7MiB/s (12.2MB/s), 2024KiB/s-5965KiB/s (2072kB/s-6108kB/s), io=12.0MiB (12.6MB), run=1007-1030msec 00:09:57.923 00:09:57.923 Disk stats (read/write): 00:09:57.923 nvme0n1: ios=45/512, merge=0/0, ticks=1724/92, in_queue=1816, util=98.10% 00:09:57.923 nvme0n2: ios=382/512, merge=0/0, ticks=1006/90, in_queue=1096, util=98.37% 00:09:57.923 nvme0n3: ios=1396/1536, merge=0/0, ticks=524/229, in_queue=753, util=88.96% 00:09:57.923 nvme0n4: ios=40/512, merge=0/0, ticks=743/90, in_queue=833, util=89.72% 00:09:57.923 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:57.923 [global] 00:09:57.923 thread=1 00:09:57.923 invalidate=1 00:09:57.923 rw=write 00:09:57.923 time_based=1 00:09:57.923 runtime=1 00:09:57.923 ioengine=libaio 00:09:57.923 direct=1 00:09:57.923 bs=4096 00:09:57.923 iodepth=128 00:09:57.923 norandommap=0 00:09:57.923 numjobs=1 00:09:57.923 00:09:57.923 verify_dump=1 00:09:57.923 verify_backlog=512 00:09:57.923 verify_state_save=0 00:09:57.923 do_verify=1 00:09:57.923 verify=crc32c-intel 00:09:57.923 [job0] 00:09:57.923 filename=/dev/nvme0n1 00:09:57.923 [job1] 00:09:57.923 filename=/dev/nvme0n2 00:09:57.923 [job2] 00:09:57.923 filename=/dev/nvme0n3 00:09:57.923 [job3] 00:09:57.923 filename=/dev/nvme0n4 00:09:57.923 Could not set queue depth (nvme0n1) 00:09:57.923 Could not set queue depth (nvme0n2) 00:09:57.923 Could not set queue depth (nvme0n3) 00:09:57.923 Could not set queue depth (nvme0n4) 00:09:58.183 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.183 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.183 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.183 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.183 fio-3.35 00:09:58.183 Starting 4 threads 00:09:59.609 00:09:59.609 job0: (groupid=0, jobs=1): err= 0: pid=4142558: Tue Dec 10 03:56:58 2024 00:09:59.609 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:09:59.609 slat (nsec): min=1311, max=10651k, avg=107537.34, stdev=754764.88 00:09:59.609 clat (usec): min=3886, max=30924, avg=13566.66, stdev=4049.85 00:09:59.609 lat (usec): min=3899, max=30928, avg=13674.20, stdev=4097.90 00:09:59.609 clat percentiles (usec): 00:09:59.609 | 1.00th=[ 7046], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[11076], 00:09:59.609 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12911], 00:09:59.609 | 70.00th=[14484], 80.00th=[15401], 90.00th=[19006], 95.00th=[22152], 00:09:59.609 | 99.00th=[28705], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:09:59.609 | 99.99th=[30802] 00:09:59.609 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:09:59.609 slat (usec): min=2, max=9122, avg=135.73, stdev=583.70 00:09:59.609 clat (usec): min=256, max=56606, avg=19410.82, stdev=11641.45 00:09:59.609 lat (usec): min=550, max=56618, avg=19546.55, stdev=11717.93 00:09:59.609 clat percentiles (usec): 00:09:59.609 | 1.00th=[ 840], 5.00th=[ 5997], 10.00th=[ 8979], 20.00th=[10421], 00:09:59.609 | 30.00th=[11731], 40.00th=[11994], 50.00th=[16057], 60.00th=[21103], 00:09:59.609 | 70.00th=[21627], 80.00th=[27919], 90.00th=[39060], 95.00th=[43254], 00:09:59.609 | 99.00th=[50070], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:09:59.609 | 99.99th=[56361] 00:09:59.609 bw ( KiB/s): min=11776, max=20016, per=20.39%, avg=15896.00, stdev=5826.56, samples=2 00:09:59.609 iops : min= 2944, max= 5004, avg=3974.00, stdev=1456.64, samples=2 00:09:59.609 lat (usec) : 500=0.01%, 750=0.05%, 1000=1.00% 00:09:59.609 lat (msec) : 2=0.05%, 4=0.43%, 10=11.80%, 20=58.14%, 50=27.92% 00:09:59.609 lat (msec) : 100=0.59% 00:09:59.609 cpu : usr=3.56%, sys=3.66%, ctx=520, majf=0, minf=1 00:09:59.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:59.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.609 issued rwts: total=3590,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.609 job1: (groupid=0, jobs=1): err= 0: pid=4142559: Tue Dec 10 03:56:58 2024 00:09:59.609 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:09:59.609 slat (nsec): min=1463, max=5421.0k, avg=83337.42, stdev=460206.39 00:09:59.609 clat (usec): min=6293, max=17419, avg=10451.55, stdev=1618.40 00:09:59.609 lat (usec): min=6298, max=17430, avg=10534.88, stdev=1654.54 00:09:59.609 clat percentiles (usec): 00:09:59.609 | 1.00th=[ 6783], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9634], 00:09:59.609 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:09:59.609 | 70.00th=[11207], 80.00th=[11994], 90.00th=[12518], 95.00th=[13173], 00:09:59.609 | 99.00th=[14877], 99.50th=[15401], 99.90th=[16188], 99.95th=[16188], 00:09:59.609 | 99.99th=[17433] 00:09:59.609 write: IOPS=6080, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1005msec); 0 zone resets 00:09:59.609 slat (usec): min=2, max=23003, avg=81.60, stdev=466.43 00:09:59.609 clat 
(usec): min=4470, max=33973, avg=10692.77, stdev=2569.75 00:09:59.609 lat (usec): min=4975, max=34013, avg=10774.37, stdev=2610.04 00:09:59.609 clat percentiles (usec): 00:09:59.609 | 1.00th=[ 6390], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[ 9765], 00:09:59.609 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10290], 00:09:59.609 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[13304], 00:09:59.609 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:09:59.609 | 99.99th=[33817] 00:09:59.609 bw ( KiB/s): min=21304, max=26568, per=30.71%, avg=23936.00, stdev=3722.21, samples=2 00:09:59.609 iops : min= 5326, max= 6642, avg=5984.00, stdev=930.55, samples=2 00:09:59.609 lat (msec) : 10=44.52%, 20=54.39%, 50=1.09% 00:09:59.609 cpu : usr=4.18%, sys=6.57%, ctx=763, majf=0, minf=1 00:09:59.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:59.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.610 issued rwts: total=5632,6111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.610 job2: (groupid=0, jobs=1): err= 0: pid=4142560: Tue Dec 10 03:56:58 2024 00:09:59.610 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:59.610 slat (nsec): min=1498, max=16365k, avg=129876.45, stdev=838712.07 00:09:59.610 clat (usec): min=7039, max=56649, avg=15404.51, stdev=6460.10 00:09:59.610 lat (usec): min=7046, max=56660, avg=15534.38, stdev=6538.57 00:09:59.610 clat percentiles (usec): 00:09:59.610 | 1.00th=[ 8225], 5.00th=[10159], 10.00th=[11207], 20.00th=[11994], 00:09:59.610 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[14615], 00:09:59.610 | 70.00th=[15139], 80.00th=[15795], 90.00th=[24249], 95.00th=[30016], 00:09:59.610 | 99.00th=[51643], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:09:59.610 | 99.99th=[56886] 00:09:59.610 write: IOPS=3933, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1005msec); 0 zone resets 00:09:59.610 slat (usec): min=2, max=9622, avg=130.35, stdev=572.09 00:09:59.610 clat (usec): min=1792, max=62210, avg=18212.94, stdev=10950.84 00:09:59.610 lat (usec): min=6607, max=62218, avg=18343.29, stdev=11009.90 00:09:59.610 clat percentiles (usec): 00:09:59.610 | 1.00th=[ 7832], 5.00th=[10421], 10.00th=[10945], 20.00th=[11207], 00:09:59.610 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13435], 60.00th=[15401], 00:09:59.610 | 70.00th=[21103], 80.00th=[21365], 90.00th=[29492], 95.00th=[47449], 00:09:59.610 | 99.00th=[56886], 99.50th=[56886], 99.90th=[62129], 99.95th=[62129], 00:09:59.610 | 99.99th=[62129] 00:09:59.610 bw ( KiB/s): min=11280, max=19320, per=19.63%, avg=15300.00, stdev=5685.14, samples=2 00:09:59.610 iops : min= 2820, max= 4830, avg=3825.00, stdev=1421.28, samples=2 00:09:59.610 lat (msec) : 2=0.01%, 10=4.37%, 20=72.96%, 50=19.94%, 100=2.72% 00:09:59.610 cpu : usr=3.49%, sys=3.69%, ctx=494, majf=0, minf=1 00:09:59.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:59.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.610 issued rwts: total=3584,3953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.610 job3: (groupid=0, jobs=1): err= 0: pid=4142561: Tue Dec 10 03:56:58 2024 00:09:59.610 
read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:09:59.610 slat (nsec): min=1568, max=11560k, avg=102333.49, stdev=743650.35 00:09:59.610 clat (usec): min=4366, max=26284, avg=12745.56, stdev=3222.04 00:09:59.610 lat (usec): min=4377, max=29020, avg=12847.89, stdev=3289.33 00:09:59.610 clat percentiles (usec): 00:09:59.610 | 1.00th=[ 5342], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:09:59.610 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[12518], 00:09:59.610 | 70.00th=[13173], 80.00th=[14353], 90.00th=[17695], 95.00th=[20055], 00:09:59.610 | 99.00th=[22938], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:09:59.610 | 99.99th=[26346] 00:09:59.610 write: IOPS=5487, BW=21.4MiB/s (22.5MB/s)(21.6MiB/1010msec); 0 zone resets 00:09:59.610 slat (usec): min=2, max=8867, avg=79.92, stdev=444.13 00:09:59.610 clat (usec): min=1527, max=34627, avg=11324.09, stdev=3833.47 00:09:59.610 lat (usec): min=1541, max=34653, avg=11404.01, stdev=3868.81 00:09:59.610 clat percentiles (usec): 00:09:59.610 | 1.00th=[ 3326], 5.00th=[ 5800], 10.00th=[ 7832], 20.00th=[ 9634], 00:09:59.610 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:09:59.610 | 70.00th=[11600], 80.00th=[12911], 90.00th=[13304], 95.00th=[14222], 00:09:59.610 | 99.00th=[30802], 99.50th=[32900], 99.90th=[33817], 99.95th=[34866], 00:09:59.610 | 99.99th=[34866] 00:09:59.610 bw ( KiB/s): min=20480, max=22840, per=27.79%, avg=21660.00, stdev=1668.77, samples=2 00:09:59.610 iops : min= 5120, max= 5710, avg=5415.00, stdev=417.19, samples=2 00:09:59.610 lat (msec) : 2=0.18%, 4=0.92%, 10=12.86%, 20=82.16%, 50=3.88% 00:09:59.610 cpu : usr=3.47%, sys=7.33%, ctx=590, majf=0, minf=2 00:09:59.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:59.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.610 issued rwts: total=5120,5542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.610 00:09:59.610 Run status group 0 (all jobs): 00:09:59.610 READ: bw=69.3MiB/s (72.6MB/s), 13.9MiB/s-21.9MiB/s (14.5MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1005-1011msec 00:09:59.610 WRITE: bw=76.1MiB/s (79.8MB/s), 15.4MiB/s-23.8MiB/s (16.1MB/s-24.9MB/s), io=77.0MiB (80.7MB), run=1005-1011msec 00:09:59.610 00:09:59.610 Disk stats (read/write): 00:09:59.610 nvme0n1: ios=3247/3584, merge=0/0, ticks=42711/57712, in_queue=100423, util=90.88% 00:09:59.610 nvme0n2: ios=4721/5120, merge=0/0, ticks=25233/25897, in_queue=51130, util=98.37% 00:09:59.610 nvme0n3: ios=3072/3399, merge=0/0, ticks=24430/27584, in_queue=52014, util=89.06% 00:09:59.610 nvme0n4: ios=4247/4608, merge=0/0, ticks=53889/51518, in_queue=105407, util=98.43% 00:09:59.610 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:59.610 [global] 00:09:59.610 thread=1 00:09:59.610 invalidate=1 00:09:59.610 rw=randwrite 00:09:59.610 time_based=1 00:09:59.610 runtime=1 00:09:59.610 ioengine=libaio 00:09:59.610 direct=1 00:09:59.610 bs=4096 00:09:59.610 iodepth=128 00:09:59.610 norandommap=0 00:09:59.610 numjobs=1 00:09:59.610 00:09:59.610 verify_dump=1 00:09:59.610 verify_backlog=512 00:09:59.610 verify_state_save=0 00:09:59.610 do_verify=1 00:09:59.610 verify=crc32c-intel 00:09:59.610 [job0] 00:09:59.610 
filename=/dev/nvme0n1 00:09:59.610 [job1] 00:09:59.610 filename=/dev/nvme0n2 00:09:59.610 [job2] 00:09:59.610 filename=/dev/nvme0n3 00:09:59.610 [job3] 00:09:59.610 filename=/dev/nvme0n4 00:09:59.610 Could not set queue depth (nvme0n1) 00:09:59.610 Could not set queue depth (nvme0n2) 00:09:59.610 Could not set queue depth (nvme0n3) 00:09:59.610 Could not set queue depth (nvme0n4) 00:09:59.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.610 fio-3.35 00:09:59.610 Starting 4 threads 00:10:00.992 00:10:00.992 job0: (groupid=0, jobs=1): err= 0: pid=4142947: Tue Dec 10 03:57:00 2024 00:10:00.992 read: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1008msec) 00:10:00.992 slat (nsec): min=1210, max=14440k, avg=96769.81, stdev=741855.26 00:10:00.992 clat (usec): min=4137, max=65084, avg=15933.67, stdev=6646.30 00:10:00.992 lat (usec): min=5350, max=65088, avg=16030.44, stdev=6707.03 00:10:00.992 clat percentiles (usec): 00:10:00.992 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[10159], 20.00th=[12125], 00:10:00.992 | 30.00th=[13566], 40.00th=[14877], 50.00th=[15401], 60.00th=[15926], 00:10:00.992 | 70.00th=[16319], 80.00th=[17433], 90.00th=[19268], 95.00th=[25035], 00:10:00.992 | 99.00th=[50594], 99.50th=[61080], 99.90th=[65274], 99.95th=[65274], 00:10:00.992 | 99.99th=[65274] 00:10:00.992 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:00.992 slat (usec): min=2, max=12135, avg=108.71, stdev=644.02 00:10:00.992 clat (usec): min=1260, max=56252, avg=16499.65, stdev=6882.86 00:10:00.992 lat (usec): min=1302, max=56256, avg=16608.35, stdev=6934.73 00:10:00.992 clat percentiles (usec): 00:10:00.992 | 1.00th=[ 4080], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10945], 00:10:00.992 | 30.00th=[11994], 40.00th=[13173], 50.00th=[14615], 60.00th=[20055], 00:10:00.992 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[25822], 00:10:00.992 | 99.00th=[45351], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:10:00.992 | 99.99th=[56361] 00:10:00.992 bw ( KiB/s): min=16184, max=16584, per=22.66%, avg=16384.00, stdev=282.84, samples=2 00:10:00.992 iops : min= 4046, max= 4146, avg=4096.00, stdev=70.71, samples=2 00:10:00.992 lat (msec) : 2=0.29%, 4=0.15%, 10=10.29%, 20=64.66%, 50=23.87% 00:10:00.992 lat (msec) : 100=0.74% 00:10:00.992 cpu : usr=2.18%, sys=4.87%, ctx=362, majf=0, minf=2 00:10:00.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:00.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.992 issued rwts: total=3793,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.992 job1: (groupid=0, jobs=1): err= 0: pid=4142966: Tue Dec 10 03:57:00 2024 00:10:00.992 read: IOPS=5982, BW=23.4MiB/s (24.5MB/s)(24.5MiB/1048msec) 00:10:00.993 slat (nsec): min=1314, max=9160.2k, avg=87887.12, stdev=620467.14 00:10:00.993 clat (usec): min=3428, max=60428, avg=11523.20, stdev=6775.76 00:10:00.993 lat (usec): min=3435, max=60431, 
avg=11611.09, stdev=6795.56 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 4621], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 9503], 00:10:00.993 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:10:00.993 | 70.00th=[10421], 80.00th=[12911], 90.00th=[15139], 95.00th=[16909], 00:10:00.993 | 99.00th=[55313], 99.50th=[57934], 99.90th=[60031], 99.95th=[60556], 00:10:00.993 | 99.99th=[60556] 00:10:00.993 write: IOPS=6351, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1048msec); 0 zone resets 00:10:00.993 slat (usec): min=2, max=7854, avg=63.04, stdev=282.13 00:10:00.993 clat (usec): min=1586, max=60432, avg=9108.24, stdev=2089.74 00:10:00.993 lat (usec): min=1599, max=60435, avg=9171.29, stdev=2106.86 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 3261], 5.00th=[ 4686], 10.00th=[ 6063], 20.00th=[ 7832], 00:10:00.993 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:10:00.993 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[10814], 00:10:00.993 | 99.00th=[12911], 99.50th=[13173], 99.90th=[18220], 99.95th=[18220], 00:10:00.993 | 99.99th=[60556] 00:10:00.993 bw ( KiB/s): min=26416, max=26816, per=36.81%, avg=26616.00, stdev=282.84, samples=2 00:10:00.993 iops : min= 6604, max= 6704, avg=6654.00, stdev=70.71, samples=2 00:10:00.993 lat (msec) : 2=0.02%, 4=2.09%, 10=56.75%, 20=40.16%, 50=0.01% 00:10:00.993 lat (msec) : 100=0.97% 00:10:00.993 cpu : usr=4.49%, sys=6.59%, ctx=810, majf=0, minf=1 00:10:00.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:00.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.993 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.993 job2: (groupid=0, jobs=1): err= 0: pid=4142984: Tue Dec 10 03:57:00 2024 00:10:00.993 read: IOPS=4647, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1005msec) 00:10:00.993 slat (nsec): min=1673, max=14155k, avg=102912.49, stdev=614170.10 00:10:00.993 clat (usec): min=731, max=40907, avg=12494.08, stdev=4772.12 00:10:00.993 lat (usec): min=6109, max=40917, avg=12596.99, stdev=4824.75 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10552], 00:10:00.993 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:00.993 | 70.00th=[11994], 80.00th=[13304], 90.00th=[15139], 95.00th=[22414], 00:10:00.993 | 99.00th=[35390], 99.50th=[36439], 99.90th=[40109], 99.95th=[41157], 00:10:00.993 | 99.99th=[41157] 00:10:00.993 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:00.993 slat (usec): min=2, max=21724, avg=96.05, stdev=557.82 00:10:00.993 clat (usec): min=6566, max=53051, avg=13439.50, stdev=5586.35 00:10:00.993 lat (usec): min=6580, max=53075, avg=13535.54, stdev=5633.88 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 7439], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[10945], 00:10:00.993 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:10:00.993 | 70.00th=[11731], 80.00th=[13698], 90.00th=[20841], 95.00th=[28181], 00:10:00.993 | 99.00th=[39060], 99.50th=[39060], 99.90th=[40633], 99.95th=[43254], 00:10:00.993 | 99.99th=[53216] 00:10:00.993 bw ( KiB/s): min=16384, max=24056, per=27.96%, avg=20220.00, stdev=5424.92, samples=2 00:10:00.993 iops : min= 4096, max= 6014, avg=5055.00, stdev=1356.23, samples=2 
00:10:00.993 lat (usec) : 750=0.01% 00:10:00.993 lat (msec) : 10=9.09%, 20=81.21%, 50=9.68%, 100=0.01% 00:10:00.993 cpu : usr=4.18%, sys=6.57%, ctx=562, majf=0, minf=1 00:10:00.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:00.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.993 issued rwts: total=4671,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.993 job3: (groupid=0, jobs=1): err= 0: pid=4142991: Tue Dec 10 03:57:00 2024 00:10:00.993 read: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1009msec) 00:10:00.993 slat (nsec): min=1593, max=26965k, avg=168551.45, stdev=1139076.45 00:10:00.993 clat (usec): min=2779, max=67851, avg=20233.74, stdev=13379.08 00:10:00.993 lat (usec): min=6555, max=67878, avg=20402.29, stdev=13479.63 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11207], 00:10:00.993 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13960], 60.00th=[15401], 00:10:00.993 | 70.00th=[20841], 80.00th=[28967], 90.00th=[43779], 95.00th=[51643], 00:10:00.993 | 99.00th=[60556], 99.50th=[61080], 99.90th=[61080], 99.95th=[62129], 00:10:00.993 | 99.99th=[67634] 00:10:00.993 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:10:00.993 slat (usec): min=2, max=22881, avg=161.06, stdev=1035.55 00:10:00.993 clat (usec): min=6963, max=61999, avg=22315.15, stdev=10416.92 00:10:00.993 lat (usec): min=6973, max=62031, avg=22476.21, stdev=10497.00 00:10:00.993 clat percentiles (usec): 00:10:00.993 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:10:00.993 | 30.00th=[14484], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:10:00.993 | 70.00th=[23462], 80.00th=[30278], 90.00th=[37487], 95.00th=[43254], 00:10:00.993 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55313], 99.95th=[58459], 00:10:00.993 | 99.99th=[62129] 00:10:00.993 bw ( KiB/s): min= 8192, max=16384, per=16.99%, avg=12288.00, stdev=5792.62, samples=2 00:10:00.993 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:00.993 lat (msec) : 4=0.02%, 10=5.38%, 20=47.05%, 50=44.21%, 100=3.34% 00:10:00.993 cpu : usr=2.68%, sys=3.77%, ctx=361, majf=0, minf=2 00:10:00.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:00.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.993 issued rwts: total=2915,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.993 00:10:00.993 Run status group 0 (all jobs): 00:10:00.993 READ: bw=65.8MiB/s (69.0MB/s), 11.3MiB/s-23.4MiB/s (11.8MB/s-24.5MB/s), io=68.9MiB (72.3MB), run=1005-1048msec 00:10:00.993 WRITE: bw=70.6MiB/s (74.0MB/s), 11.9MiB/s-24.8MiB/s (12.5MB/s-26.0MB/s), io=74.0MiB (77.6MB), run=1005-1048msec 00:10:00.993 00:10:00.993 Disk stats (read/write): 00:10:00.993 nvme0n1: ios=3123/3359, merge=0/0, ticks=48552/55133, in_queue=103685, util=97.19% 00:10:00.993 nvme0n2: ios=5155/5632, merge=0/0, ticks=53346/50306, in_queue=103652, util=97.34% 00:10:00.993 nvme0n3: ios=3896/4096, merge=0/0, ticks=24864/26513, in_queue=51377, util=98.53% 00:10:00.993 nvme0n4: ios=2578/2823, merge=0/0, ticks=23989/28921, in_queue=52910, util=97.14% 00:10:00.993 
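(Only two wrapper knobs vary across the four fio runs above: queue depth, -d 1 vs -d 128, and access pattern, -t write vs -t randwrite. Comparing the flags with the job files the wrapper echoes back suggests the mapping below; the wrapper's internals are not shown in this log, so treat this as inference from its output, not as its source.)

  #  fio-wrapper flag         job-file key (as echoed above)
  #  -i 4096              ->  bs=4096
  #  -d 1 | -d 128        ->  iodepth=1 | iodepth=128
  #  -t write|randwrite   ->  rw=write | rw=randwrite
  #  -r 1                 ->  runtime=1 (with time_based=1)
  #  -v                   ->  do_verify=1, verify=crc32c-intel, verify_dump=1
  # A deep-queue rerun of the last workload would therefore be:
  scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v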
03:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:00.993 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4143178 00:10:00.993 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:00.993 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:00.993 [global] 00:10:00.993 thread=1 00:10:00.993 invalidate=1 00:10:00.993 rw=read 00:10:00.993 time_based=1 00:10:00.993 runtime=10 00:10:00.993 ioengine=libaio 00:10:00.993 direct=1 00:10:00.993 bs=4096 00:10:00.993 iodepth=1 00:10:00.993 norandommap=1 00:10:00.993 numjobs=1 00:10:00.993 00:10:00.993 [job0] 00:10:00.993 filename=/dev/nvme0n1 00:10:00.993 [job1] 00:10:00.993 filename=/dev/nvme0n2 00:10:00.993 [job2] 00:10:00.993 filename=/dev/nvme0n3 00:10:00.993 [job3] 00:10:00.993 filename=/dev/nvme0n4 00:10:00.993 Could not set queue depth (nvme0n1) 00:10:00.993 Could not set queue depth (nvme0n2) 00:10:00.993 Could not set queue depth (nvme0n3) 00:10:00.993 Could not set queue depth (nvme0n4) 00:10:01.251 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.251 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.251 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.251 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.251 fio-3.35 00:10:01.251 Starting 4 threads 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:04.522 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2121728, buflen=4096 00:10:04.522 fio: pid=4143548, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:04.522 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1236992, buflen=4096 00:10:04.522 fio: pid=4143537, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.522 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:04.522 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49152000, buflen=4096 00:10:04.522 fio: pid=4143478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:04.779 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13570048, buflen=4096 00:10:04.779 fio: pid=4143502, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:10:04.779 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.779 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:04.779 00:10:04.779 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4143478: Tue Dec 10 03:57:03 2024 00:10:04.779 read: IOPS=3841, BW=15.0MiB/s (15.7MB/s)(46.9MiB/3124msec) 00:10:04.779 slat (usec): min=6, max=15913, avg= 9.74, stdev=179.87 00:10:04.779 clat (usec): min=150, max=42033, avg=247.59, stdev=1552.93 00:10:04.779 lat (usec): min=157, max=42043, avg=257.33, stdev=1564.00 00:10:04.779 clat percentiles (usec): 00:10:04.779 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:10:04.779 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:04.779 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 233], 00:10:04.779 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[41157], 99.95th=[41157], 00:10:04.779 | 99.99th=[42206] 00:10:04.779 bw ( KiB/s): min= 104, max=20984, per=80.57%, avg=15755.00, stdev=8122.44, samples=6 00:10:04.779 iops : min= 26, max= 5246, avg=3938.67, stdev=2030.63, samples=6 00:10:04.779 lat (usec) : 250=97.79%, 500=2.03%, 750=0.01% 00:10:04.779 lat (msec) : 4=0.01%, 20=0.01%, 50=0.14% 00:10:04.779 cpu : usr=0.96%, sys=3.59%, ctx=12003, majf=0, minf=2 00:10:04.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 issued rwts: total=12001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.779 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4143502: Tue Dec 10 03:57:03 2024 00:10:04.779 read: IOPS=1004, BW=4016KiB/s (4112kB/s)(12.9MiB/3300msec) 00:10:04.779 slat (usec): min=5, max=15781, avg=16.43, stdev=360.60 00:10:04.779 clat (usec): min=155, max=42278, avg=969.03, stdev=5545.24 00:10:04.779 lat (usec): min=162, max=42285, avg=981.39, stdev=5552.83 00:10:04.779 clat percentiles (usec): 00:10:04.779 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:10:04.779 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:10:04.779 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 251], 95.00th=[ 293], 00:10:04.779 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:04.779 | 99.99th=[42206] 00:10:04.779 bw ( KiB/s): min= 96, max=19920, per=22.40%, avg=4380.50, stdev=7817.70, samples=6 00:10:04.779 iops : min= 24, max= 4980, avg=1095.00, stdev=1954.48, samples=6 00:10:04.779 lat (usec) : 250=89.74%, 500=8.36% 00:10:04.779 lat (msec) : 50=1.87% 00:10:04.779 cpu : usr=0.24%, sys=0.94%, ctx=3316, majf=0, minf=2 00:10:04.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 issued rwts: total=3314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.779 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4143537: Tue Dec 10 03:57:03 2024 00:10:04.779 read: IOPS=103, BW=413KiB/s (423kB/s)(1208KiB/2924msec) 00:10:04.779 slat (usec): min=7, max=6848, avg=34.99, stdev=392.78 00:10:04.779 clat (usec): min=196, max=42155, avg=9574.72, stdev=17172.85 00:10:04.779 lat (usec): min=206, max=49004, avg=9609.73, stdev=17225.80 00:10:04.779 clat percentiles (usec): 00:10:04.779 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:10:04.779 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 262], 00:10:04.779 | 70.00th=[ 277], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:04.779 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.779 | 99.99th=[42206] 00:10:04.779 bw ( KiB/s): min= 96, max= 1936, per=2.39%, avg=467.20, stdev=821.09, samples=5 00:10:04.779 iops : min= 24, max= 484, avg=116.80, stdev=205.27, samples=5 00:10:04.779 lat (usec) : 250=51.49%, 500=25.41% 00:10:04.779 lat (msec) : 50=22.77% 00:10:04.779 cpu : usr=0.10%, sys=0.07%, ctx=304, majf=0, minf=2 00:10:04.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.779 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4143548: Tue Dec 10 03:57:03 2024 00:10:04.779 read: IOPS=193, BW=773KiB/s (792kB/s)(2072KiB/2680msec) 00:10:04.779 slat (nsec): min=3628, max=39169, avg=9735.96, stdev=4799.99 00:10:04.779 clat (usec): min=168, max=42093, avg=5144.43, stdev=13261.17 00:10:04.779 lat (usec): min=175, max=42116, avg=5154.16, stdev=13264.96 00:10:04.779 clat percentiles (usec): 00:10:04.779 | 1.00th=[ 192], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229], 00:10:04.779 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 262], 00:10:04.779 | 70.00th=[ 285], 80.00th=[ 338], 90.00th=[41157], 95.00th=[41157], 00:10:04.779 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.779 | 99.99th=[42206] 00:10:04.779 bw ( KiB/s): min= 96, max= 1408, per=2.46%, avg=481.60, stdev=580.44, samples=5 00:10:04.779 iops : min= 24, max= 352, avg=120.40, stdev=145.11, samples=5 00:10:04.779 lat (usec) : 250=51.45%, 500=36.22%, 750=0.19% 00:10:04.779 lat (msec) : 50=11.95% 00:10:04.779 cpu : usr=0.07%, sys=0.22%, ctx=519, majf=0, minf=1 00:10:04.779 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.779 issued rwts: total=519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.779 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.779 00:10:04.779 Run status group 0 (all jobs): 00:10:04.779 READ: bw=19.1MiB/s (20.0MB/s), 413KiB/s-15.0MiB/s (423kB/s-15.7MB/s), io=63.0MiB (66.1MB), run=2680-3300msec 00:10:04.779 00:10:04.779 Disk stats (read/write): 00:10:04.779 nvme0n1: ios=11998/0, merge=0/0, ticks=2827/0, in_queue=2827, util=93.37% 00:10:04.779 nvme0n2: ios=3303/0, merge=0/0, ticks=2992/0, in_queue=2992, util=94.47% 00:10:04.779 nvme0n3: ios=299/0, merge=0/0, ticks=2763/0, 
in_queue=2763, util=95.96% 00:10:04.779 nvme0n4: ios=302/0, merge=0/0, ticks=2529/0, in_queue=2529, util=96.42% 00:10:05.036 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.036 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:05.293 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.294 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:05.294 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.294 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:05.550 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.550 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:05.807 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:05.807 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4143178 00:10:05.807 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:05.807 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.807 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.807 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:05.807 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:05.807 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:06.062 nvmf hotplug test: fio failed as expected 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:06.062 03:57:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.062 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.062 rmmod nvme_tcp 00:10:06.062 rmmod nvme_fabrics 00:10:06.319 rmmod nvme_keyring 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4140499 ']' 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4140499 ']' 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4140499' 00:10:06.319 killing process with pid 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4140499 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.319 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:06.578 03:57:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.578 03:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.482 00:10:08.482 real 0m26.947s 00:10:08.482 user 1m47.519s 00:10:08.482 sys 0m8.419s 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.482 ************************************ 00:10:08.482 END TEST nvmf_fio_target 00:10:08.482 ************************************ 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.482 ************************************ 00:10:08.482 START TEST nvmf_bdevio 00:10:08.482 ************************************ 00:10:08.482 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:08.740 * Looking for test storage... 
00:10:08.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.740 --rc genhtml_branch_coverage=1 00:10:08.740 --rc genhtml_function_coverage=1 00:10:08.740 --rc genhtml_legend=1 00:10:08.740 --rc geninfo_all_blocks=1 00:10:08.740 --rc geninfo_unexecuted_blocks=1 00:10:08.740 00:10:08.740 ' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.740 --rc genhtml_branch_coverage=1 00:10:08.740 --rc genhtml_function_coverage=1 00:10:08.740 --rc genhtml_legend=1 00:10:08.740 --rc geninfo_all_blocks=1 00:10:08.740 --rc geninfo_unexecuted_blocks=1 00:10:08.740 00:10:08.740 ' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.740 --rc genhtml_branch_coverage=1 00:10:08.740 --rc genhtml_function_coverage=1 00:10:08.740 --rc genhtml_legend=1 00:10:08.740 --rc geninfo_all_blocks=1 00:10:08.740 --rc geninfo_unexecuted_blocks=1 00:10:08.740 00:10:08.740 ' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.740 --rc genhtml_branch_coverage=1 00:10:08.740 --rc genhtml_function_coverage=1 00:10:08.740 --rc genhtml_legend=1 00:10:08.740 --rc geninfo_all_blocks=1 00:10:08.740 --rc geninfo_unexecuted_blocks=1 00:10:08.740 00:10:08.740 ' 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.740 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.741 03:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.307 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:15.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:15.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.308 03:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:15.308 Found net devices under 0000:af:00.0: cvl_0_0 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:15.308 Found net devices under 0000:af:00.1: cvl_0_1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.308 
03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:10:15.308 00:10:15.308 --- 10.0.0.2 ping statistics --- 00:10:15.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.308 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:15.308 00:10:15.308 --- 10.0.0.1 ping statistics --- 00:10:15.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.308 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4148208 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4148208 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4148208 ']' 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.308 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.308 [2024-12-10 03:57:14.024274] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:10:15.308 [2024-12-10 03:57:14.024318] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.308 [2024-12-10 03:57:14.102285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.308 [2024-12-10 03:57:14.141533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.309 [2024-12-10 03:57:14.141572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.309 [2024-12-10 03:57:14.141579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.309 [2024-12-10 03:57:14.141585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.309 [2024-12-10 03:57:14.141591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.309 [2024-12-10 03:57:14.143130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.309 [2024-12-10 03:57:14.143248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.309 [2024-12-10 03:57:14.143357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.309 [2024-12-10 03:57:14.143357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 [2024-12-10 03:57:14.292339] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 Malloc0 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.309 03:57:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 [2024-12-10 03:57:14.358947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.309 { 00:10:15.309 "params": { 00:10:15.309 "name": "Nvme$subsystem", 00:10:15.309 "trtype": "$TEST_TRANSPORT", 00:10:15.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.309 "adrfam": "ipv4", 00:10:15.309 "trsvcid": "$NVMF_PORT", 00:10:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.309 "hdgst": ${hdgst:-false}, 00:10:15.309 "ddgst": ${ddgst:-false} 00:10:15.309 }, 00:10:15.309 "method": "bdev_nvme_attach_controller" 00:10:15.309 } 00:10:15.309 EOF 00:10:15.309 )") 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:15.309 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.309 "params": { 00:10:15.309 "name": "Nvme1", 00:10:15.309 "trtype": "tcp", 00:10:15.309 "traddr": "10.0.0.2", 00:10:15.309 "adrfam": "ipv4", 00:10:15.309 "trsvcid": "4420", 00:10:15.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.309 "hdgst": false, 00:10:15.309 "ddgst": false 00:10:15.309 }, 00:10:15.309 "method": "bdev_nvme_attach_controller" 00:10:15.309 }' 00:10:15.309 [2024-12-10 03:57:14.411609] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:10:15.309 [2024-12-10 03:57:14.411653] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148423 ] 00:10:15.309 [2024-12-10 03:57:14.485904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.309 [2024-12-10 03:57:14.528214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.309 [2024-12-10 03:57:14.528322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.309 [2024-12-10 03:57:14.528322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.567 I/O targets: 00:10:15.567 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.567 00:10:15.567 00:10:15.567 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.567 http://cunit.sourceforge.net/ 00:10:15.567 00:10:15.567 00:10:15.567 Suite: bdevio tests on: Nvme1n1 00:10:15.825 Test: blockdev write read block ...passed 00:10:15.825 Test: blockdev write zeroes read block ...passed 00:10:15.825 Test: blockdev write zeroes read no split ...passed 00:10:15.825 Test: blockdev write zeroes read split ...passed 00:10:15.825 Test: blockdev write zeroes read split partial ...passed 00:10:15.825 Test: blockdev reset ...[2024-12-10 03:57:14.922941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:15.825 [2024-12-10 03:57:14.923003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1833610 (9): Bad file descriptor 00:10:15.825 [2024-12-10 03:57:15.016905] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:15.825 passed 00:10:15.825 Test: blockdev write read 8 blocks ...passed 00:10:15.825 Test: blockdev write read size > 128k ...passed 00:10:15.825 Test: blockdev write read invalid size ...passed 00:10:15.825 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.825 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.825 Test: blockdev write read max offset ...passed 00:10:16.084 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.084 Test: blockdev writev readv 8 blocks ...passed 00:10:16.084 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.084 Test: blockdev writev readv block ...passed 00:10:16.084 Test: blockdev writev readv size > 128k ...passed 00:10:16.084 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.084 Test: blockdev comparev and writev ...[2024-12-10 03:57:15.227820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.227851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.227865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.227872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.228669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.084 [2024-12-10 03:57:15.228676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:16.084 passed 00:10:16.084 Test: blockdev nvme passthru rw ...passed 00:10:16.084 Test: blockdev nvme passthru vendor specific ...[2024-12-10 03:57:15.311514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.084 [2024-12-10 03:57:15.311532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.311640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.084 [2024-12-10 03:57:15.311649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.311748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.084 [2024-12-10 03:57:15.311757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:16.084 [2024-12-10 03:57:15.311861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.084 [2024-12-10 03:57:15.311870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:16.084 passed 00:10:16.084 Test: blockdev nvme admin passthru ...passed 00:10:16.084 Test: blockdev copy ...passed 00:10:16.084 00:10:16.084 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.084 suites 1 1 n/a 0 0 00:10:16.084 tests 23 23 23 0 0 00:10:16.084 asserts 152 152 152 0 n/a 00:10:16.084 00:10:16.084 Elapsed time = 1.124 seconds 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.346 rmmod nvme_tcp 00:10:16.346 rmmod nvme_fabrics 00:10:16.346 rmmod nvme_keyring 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4148208 ']' 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4148208 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4148208 ']' 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4148208 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.346 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148208 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148208' 00:10:16.606 killing process with pid 4148208 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4148208 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4148208 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.606 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.143 00:10:19.143 real 0m10.134s 00:10:19.143 user 0m10.657s 00:10:19.143 sys 0m4.980s 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.143 ************************************ 00:10:19.143 END TEST nvmf_bdevio 00:10:19.143 ************************************ 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.143 00:10:19.143 real 4m36.429s 00:10:19.143 user 10m24.749s 00:10:19.143 sys 1m37.659s 
00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.143 ************************************ 00:10:19.143 END TEST nvmf_target_core 00:10:19.143 ************************************ 00:10:19.143 03:57:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.143 03:57:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.143 03:57:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.143 03:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.143 ************************************ 00:10:19.143 START TEST nvmf_target_extra 00:10:19.143 ************************************ 00:10:19.143 03:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.143 * Looking for test storage... 00:10:19.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.143 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.144 --rc genhtml_branch_coverage=1 00:10:19.144 --rc genhtml_function_coverage=1 00:10:19.144 --rc genhtml_legend=1 00:10:19.144 --rc geninfo_all_blocks=1 00:10:19.144 --rc geninfo_unexecuted_blocks=1 00:10:19.144 00:10:19.144 ' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.144 --rc genhtml_branch_coverage=1 00:10:19.144 --rc genhtml_function_coverage=1 00:10:19.144 --rc genhtml_legend=1 00:10:19.144 --rc geninfo_all_blocks=1 00:10:19.144 --rc geninfo_unexecuted_blocks=1 00:10:19.144 00:10:19.144 ' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.144 --rc genhtml_branch_coverage=1 00:10:19.144 --rc genhtml_function_coverage=1 00:10:19.144 --rc genhtml_legend=1 00:10:19.144 --rc geninfo_all_blocks=1 00:10:19.144 --rc geninfo_unexecuted_blocks=1 00:10:19.144 00:10:19.144 ' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.144 --rc genhtml_branch_coverage=1 00:10:19.144 --rc genhtml_function_coverage=1 00:10:19.144 --rc genhtml_legend=1 00:10:19.144 --rc geninfo_all_blocks=1 00:10:19.144 --rc geninfo_unexecuted_blocks=1 00:10:19.144 00:10:19.144 ' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.144 ************************************ 00:10:19.144 START TEST nvmf_example 00:10:19.144 ************************************ 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.144 * Looking for test storage... 
00:10:19.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:19.144 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.145 --rc genhtml_branch_coverage=1 00:10:19.145 --rc genhtml_function_coverage=1 00:10:19.145 --rc genhtml_legend=1 00:10:19.145 --rc geninfo_all_blocks=1 00:10:19.145 --rc geninfo_unexecuted_blocks=1 00:10:19.145 00:10:19.145 ' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.145 --rc genhtml_branch_coverage=1 00:10:19.145 --rc genhtml_function_coverage=1 00:10:19.145 --rc genhtml_legend=1 00:10:19.145 --rc geninfo_all_blocks=1 00:10:19.145 --rc geninfo_unexecuted_blocks=1 00:10:19.145 00:10:19.145 ' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.145 --rc genhtml_branch_coverage=1 00:10:19.145 --rc genhtml_function_coverage=1 00:10:19.145 --rc genhtml_legend=1 00:10:19.145 --rc geninfo_all_blocks=1 00:10:19.145 --rc geninfo_unexecuted_blocks=1 00:10:19.145 00:10:19.145 ' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.145 --rc genhtml_branch_coverage=1 00:10:19.145 --rc genhtml_function_coverage=1 00:10:19.145 --rc genhtml_legend=1 00:10:19.145 --rc geninfo_all_blocks=1 00:10:19.145 --rc geninfo_unexecuted_blocks=1 00:10:19.145 00:10:19.145 ' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:19.145 03:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.145 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.404 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:19.405 03:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.405 03:57:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:25.984 03:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.984 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:25.985 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:25.985 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:25.985 Found net devices under 0000:af:00.0: cvl_0_0 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:25.985 Found net devices under 0000:af:00.1: cvl_0_1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.985 03:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.985 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:10:25.986 00:10:25.986 --- 10.0.0.2 ping statistics --- 00:10:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.986 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:25.986 00:10:25.986 --- 10.0.0.1 ping statistics --- 00:10:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.986 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4152208 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4152208 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 4152208 ']' 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.986 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.986 03:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.987 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.987 03:57:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.987 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:26.246 03:57:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:38.443 Initializing NVMe Controllers 00:10:38.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:38.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:38.443 Initialization complete. Launching workers. 00:10:38.443 ======================================================== 00:10:38.443 Latency(us) 00:10:38.443 Device Information : IOPS MiB/s Average min max 00:10:38.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18201.31 71.10 3515.61 542.43 15476.69 00:10:38.443 ======================================================== 00:10:38.443 Total : 18201.31 71.10 3515.61 542.43 15476.69 00:10:38.443 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.443 rmmod nvme_tcp 00:10:38.443 rmmod nvme_fabrics 00:10:38.443 rmmod nvme_keyring 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 4152208 ']' 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 4152208 00:10:38.443 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 4152208 ']' 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 4152208 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4152208 00:10:38.444 03:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4152208' 00:10:38.444 killing process with pid 4152208 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 4152208 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 4152208 00:10:38.444 nvmf threads initialize successfully 00:10:38.444 bdev subsystem init successfully 00:10:38.444 created an nvmf target service 00:10:38.444 create targets' poll groups done 00:10:38.444 all subsystems of target started 00:10:38.444 nvmf target is running 00:10:38.444 all subsystems of target stopped 00:10:38.444 destroy targets' poll groups done 00:10:38.444 destroyed the nvmf target service 00:10:38.444 bdev subsystem finish successfully 00:10:38.444 nvmf threads destroy successfully 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.444 03:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.012 00:10:39.012 real 0m19.844s 00:10:39.012 user 0m46.527s 00:10:39.012 sys 0m5.948s 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.012 ************************************ 00:10:39.012 END TEST nvmf_example 00:10:39.012 ************************************ 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:39.012 03:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:39.013 ************************************ 00:10:39.013 START TEST nvmf_filesystem 00:10:39.013 ************************************ 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:39.013 * Looking for test storage... 00:10:39.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.013 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.276 --rc genhtml_branch_coverage=1 00:10:39.276 --rc genhtml_function_coverage=1 00:10:39.276 --rc genhtml_legend=1 00:10:39.276 --rc geninfo_all_blocks=1 00:10:39.276 --rc geninfo_unexecuted_blocks=1 00:10:39.276 00:10:39.276 ' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.276 --rc genhtml_branch_coverage=1 00:10:39.276 --rc genhtml_function_coverage=1 00:10:39.276 --rc genhtml_legend=1 00:10:39.276 --rc geninfo_all_blocks=1 00:10:39.276 --rc geninfo_unexecuted_blocks=1 00:10:39.276 00:10:39.276 ' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.276 --rc genhtml_branch_coverage=1 00:10:39.276 --rc genhtml_function_coverage=1 00:10:39.276 --rc genhtml_legend=1 00:10:39.276 --rc geninfo_all_blocks=1 00:10:39.276 --rc geninfo_unexecuted_blocks=1 00:10:39.276 00:10:39.276 ' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.276 --rc genhtml_branch_coverage=1 00:10:39.276 --rc genhtml_function_coverage=1 00:10:39.276 --rc genhtml_legend=1 00:10:39.276 --rc geninfo_all_blocks=1 00:10:39.276 --rc geninfo_unexecuted_blocks=1 00:10:39.276 00:10:39.276 ' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:39.276 03:57:38 
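The lt 1.15 2 / cmp_versions trace above is a plain component-wise numeric compare: split both versions on ., - and :, then walk the fields, treating missing fields as zero. A self-contained sketch of the same logic under a hypothetical name (the real implementation lives in scripts/common.sh):

  # version_lt A B -> exit 0 iff A sorts strictly before B (hypothetical helper).
  version_lt() {
      local IFS=.-:                # same separators the trace shows
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                     # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'      # 1 < 2, so this prints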
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:39.276 
03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:39.276 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:39.277 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:39.277 #define SPDK_CONFIG_H 00:10:39.277 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:39.277 #define SPDK_CONFIG_APPS 1 00:10:39.277 #define SPDK_CONFIG_ARCH native 00:10:39.277 #undef SPDK_CONFIG_ASAN 00:10:39.277 #undef SPDK_CONFIG_AVAHI 00:10:39.277 #undef SPDK_CONFIG_CET 00:10:39.278 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:39.278 #define SPDK_CONFIG_COVERAGE 1 00:10:39.278 #define SPDK_CONFIG_CROSS_PREFIX 00:10:39.278 #undef SPDK_CONFIG_CRYPTO 00:10:39.278 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:39.278 #undef SPDK_CONFIG_CUSTOMOCF 00:10:39.278 #undef SPDK_CONFIG_DAOS 00:10:39.278 #define SPDK_CONFIG_DAOS_DIR 00:10:39.278 #define SPDK_CONFIG_DEBUG 1 00:10:39.278 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:39.278 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:39.278 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:39.278 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:39.278 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:39.278 #undef SPDK_CONFIG_DPDK_UADK 00:10:39.278 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:39.278 #define SPDK_CONFIG_EXAMPLES 1 00:10:39.278 #undef SPDK_CONFIG_FC 00:10:39.278 #define SPDK_CONFIG_FC_PATH 00:10:39.278 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:39.278 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:39.278 #define SPDK_CONFIG_FSDEV 1 00:10:39.278 #undef SPDK_CONFIG_FUSE 00:10:39.278 #undef SPDK_CONFIG_FUZZER 00:10:39.278 #define SPDK_CONFIG_FUZZER_LIB 00:10:39.278 #undef SPDK_CONFIG_GOLANG 00:10:39.278 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:39.278 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:39.278 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:39.278 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:39.278 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:39.278 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:39.278 #undef SPDK_CONFIG_HAVE_LZ4 00:10:39.278 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:39.278 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:39.278 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:39.278 #define SPDK_CONFIG_IDXD 1 00:10:39.278 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:39.278 #undef SPDK_CONFIG_IPSEC_MB 00:10:39.278 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:39.278 #define SPDK_CONFIG_ISAL 1 00:10:39.278 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:39.278 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:39.278 #define SPDK_CONFIG_LIBDIR 00:10:39.278 #undef SPDK_CONFIG_LTO 00:10:39.278 #define SPDK_CONFIG_MAX_LCORES 128 00:10:39.278 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:39.278 #define SPDK_CONFIG_NVME_CUSE 1 00:10:39.278 #undef SPDK_CONFIG_OCF 00:10:39.278 #define SPDK_CONFIG_OCF_PATH 00:10:39.278 #define SPDK_CONFIG_OPENSSL_PATH 00:10:39.278 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:39.278 #define SPDK_CONFIG_PGO_DIR 00:10:39.278 #undef SPDK_CONFIG_PGO_USE 00:10:39.278 #define SPDK_CONFIG_PREFIX /usr/local 00:10:39.278 #undef SPDK_CONFIG_RAID5F 00:10:39.278 #undef SPDK_CONFIG_RBD 00:10:39.278 #define SPDK_CONFIG_RDMA 1 00:10:39.278 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:39.278 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:39.278 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:39.278 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:39.278 #define SPDK_CONFIG_SHARED 1 00:10:39.278 #undef SPDK_CONFIG_SMA 00:10:39.278 #define SPDK_CONFIG_TESTS 1 00:10:39.278 #undef SPDK_CONFIG_TSAN 
00:10:39.278 #define SPDK_CONFIG_UBLK 1 00:10:39.278 #define SPDK_CONFIG_UBSAN 1 00:10:39.278 #undef SPDK_CONFIG_UNIT_TESTS 00:10:39.278 #undef SPDK_CONFIG_URING 00:10:39.278 #define SPDK_CONFIG_URING_PATH 00:10:39.278 #undef SPDK_CONFIG_URING_ZNS 00:10:39.278 #undef SPDK_CONFIG_USDT 00:10:39.278 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:39.278 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:39.278 #define SPDK_CONFIG_VFIO_USER 1 00:10:39.278 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:39.278 #define SPDK_CONFIG_VHOST 1 00:10:39.278 #define SPDK_CONFIG_VIRTIO 1 00:10:39.278 #undef SPDK_CONFIG_VTUNE 00:10:39.278 #define SPDK_CONFIG_VTUNE_DIR 00:10:39.278 #define SPDK_CONFIG_WERROR 1 00:10:39.278 #define SPDK_CONFIG_WPDK_DIR 00:10:39.278 #undef SPDK_CONFIG_XNVME 00:10:39.278 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:39.278 03:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:39.278 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
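Each ": 0" / "export FLAG" pair in this stretch of trace (the run continues below) is the usual defaulting idiom, presumably ": ${FLAG:=default}" followed by the export: a value inherited from autorun-spdk.conf survives, anything unset falls back. With a made-up flag name:

  # Hypothetical flag showing the default-then-export pattern seen in the trace.
  : "${SPDK_TEST_EXAMPLE:=0}"    # ':' is a no-op; the expansion assigns the default
  export SPDK_TEST_EXAMPLE
  (( SPDK_TEST_EXAMPLE == 1 )) && echo 'example suite enabled'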
00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:39.279 03:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:39.279 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
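The LD_LIBRARY_PATH and PYTHONPATH values above carry the same directories several times over, evidently because the env scripts are re-sourced once per nested run_test level; bash tolerates this and it only slows lookups marginally. If deduplication were ever wanted, a few lines would do (hypothetical helper, not part of the suite):

  # Collapse repeated entries in a ':'-separated list such as the LD_LIBRARY_PATH above.
  dedup_path() {
      local IFS=: out= dir
      for dir in $1; do
          [[ ":$out:" == *":$dir:"* ]] || out=${out:+$out:}$dir
      done
      printf '%s\n' "$out"
  }
  LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")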
00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
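The suppression-file shuffle at @204 through @244 above reduces to three lines: write the known-benign leak patterns to a file, then point LeakSanitizer at it. Same path and pattern as the trace:

  # Recreate the leak-suppression setup logged above.
  supp=/var/tmp/asan_suppression_file
  echo 'leak:libfuse3.so' > "$supp"          # the one suppression this run writes
  export LSAN_OPTIONS=suppressions=$supp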
00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:39.280 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4154557 ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4154557 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
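The stretch above finishes the harness's per-test environment setup. The sanitizer settings are the part worth calling out: ASan/UBSan are told to abort on the first report (UBSan exits 134 so the failure is attributed like a SIGABRT), and a known libfuse3 leak is whitelisted through a LeakSanitizer suppression file. Reproduced from the trace (paths and values are exactly the ones exported above):

    # Abort on the first sanitizer report; keep coredumps enabled for triage.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # Suppress a known benign leak in libfuse3.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file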
00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.wAl83R 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wAl83R/tests/target /tmp/spdk.wAl83R 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89076068352 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:10:39.281 03:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11761135616 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344409600 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074192384 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:39.281 * Looking for 
test storage... 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89076068352 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13975728128 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:39.281 03:57:38 
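set_test_storage, traced above, decides where test scratch data lives: it walks the candidate list (the test's own directory, then a mktemp fallback under /tmp), parses `df -T` into per-mount arrays, and accepts the first mount whose free space covers the 2 GiB request, with an extra guard that projected usage stays under 95% of the root overlay. A simplified sketch of the same check, not the harness's exact code (`df -B1 --output=avail` is a GNU coreutils shortcut for the byte math the arrays do above):

    requested_size=2147483648   # 2 GiB, as requested in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    # Free bytes on the filesystem backing target_dir.
    avail=$(df -B1 --output=avail "$target_dir" | tail -n 1)
    if [ "$avail" -ge "$requested_size" ]; then
        export SPDK_TEST_STORAGE=$target_dir   # matches "Found test storage at ..." above
    fi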
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.281 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.282 --rc genhtml_branch_coverage=1 00:10:39.282 --rc genhtml_function_coverage=1 00:10:39.282 --rc genhtml_legend=1 00:10:39.282 --rc geninfo_all_blocks=1 00:10:39.282 --rc geninfo_unexecuted_blocks=1 00:10:39.282 00:10:39.282 ' 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.282 --rc genhtml_branch_coverage=1 00:10:39.282 --rc genhtml_function_coverage=1 00:10:39.282 --rc genhtml_legend=1 00:10:39.282 --rc geninfo_all_blocks=1 00:10:39.282 --rc geninfo_unexecuted_blocks=1 00:10:39.282 00:10:39.282 ' 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.282 --rc genhtml_branch_coverage=1 00:10:39.282 --rc genhtml_function_coverage=1 00:10:39.282 --rc genhtml_legend=1 00:10:39.282 --rc geninfo_all_blocks=1 00:10:39.282 --rc geninfo_unexecuted_blocks=1 00:10:39.282 00:10:39.282 ' 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.282 --rc genhtml_branch_coverage=1 00:10:39.282 --rc genhtml_function_coverage=1 00:10:39.282 --rc genhtml_legend=1 00:10:39.282 --rc geninfo_all_blocks=1 00:10:39.282 --rc geninfo_unexecuted_blocks=1 00:10:39.282 00:10:39.282 ' 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.282 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
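The `lt 1.15 2` call traced above is how the harness decides between old and new lcov option spellings: cmp_versions splits each version string on `.`, `-`, and `:` and compares component-wise, so the installed lcov 1.x selects the legacy `--rc lcov_branch_coverage/lcov_function_coverage` flags that end up in LCOV_OPTS. A condensed sketch of that comparison, assuming purely numeric dot-separated components (the real helper also handles `-` and `:` separators):

    lt() {  # succeed (return 0) when version $1 sorts strictly before version $2
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "old lcov"   # prints "old lcov", as in this run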
-- nvmf/common.sh@7 -- # uname -s 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.542 03:57:38 
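Note the real error captured just above: `nvmf/common.sh: line 33: [: : integer expression expected`. The traced command is `'[' '' -eq 1 ']'`, i.e. `-eq` received an empty string because the variable under test was unset. It is harmless here (the test simply fails and the script continues), but the standard hardening is a default expansion so `[` always sees an integer; the variable name below is illustrative, not the one in nvmf/common.sh:

    # Fragile: errors out exactly as above when SOME_FLAG is unset or empty.
    #   [ "$SOME_FLAG" -eq 1 ] && echo "flag set"
    # Robust: treat unset/empty as 0.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"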
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.542 03:57:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.116 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:46.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:46.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.117 03:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:46.117 Found net devices under 0000:af:00.0: cvl_0_0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:46.117 Found net devices under 0000:af:00.1: cvl_0_1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.117 03:57:44 
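The discovery loop above never shells out to lspci for interface names: each allow-listed PCI function is resolved through sysfs, ports whose operstate is up are kept, and both e810 ports resolve to the cvl_0_0/cvl_0_1 names used for the rest of the run. The same lookup done by hand, using one PCI address from the trace:

    pci=0000:af:00.0
    # Network interfaces registered by this PCI function (cvl_0_0 in this run):
    ls /sys/bus/pci/devices/$pci/net/
    # Driver currently bound to it (ice, per the trace):
    basename "$(readlink /sys/bus/pci/devices/$pci/driver)"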
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:10:46.117 00:10:46.117 --- 10.0.0.2 ping statistics --- 00:10:46.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.117 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:46.117 00:10:46.117 --- 10.0.0.1 ping statistics --- 00:10:46.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.117 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:46.117 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.118 ************************************ 00:10:46.118 START TEST nvmf_filesystem_no_in_capsule 00:10:46.118 ************************************ 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4157600 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4157600 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4157600 ']' 00:10:46.118 
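nvmf_tcp_init, traced above, builds the test topology out of the two physical ports: cvl_0_0 moves into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits NVMe/TCP traffic on the standard port 4420, and both directions are ping-verified. Condensed from the trace (root required; interface names as discovered above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # initiator -> target, as verified above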
03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.118 03:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.118 [2024-12-10 03:57:44.694343] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:10:46.118 [2024-12-10 03:57:44.694392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.118 [2024-12-10 03:57:44.775253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.118 [2024-12-10 03:57:44.815885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.118 [2024-12-10 03:57:44.815926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.118 [2024-12-10 03:57:44.815933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.118 [2024-12-10 03:57:44.815938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.118 [2024-12-10 03:57:44.815943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:46.118 [2024-12-10 03:57:44.817464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.118 [2024-12-10 03:57:44.817573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.118 [2024-12-10 03:57:44.817683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.118 [2024-12-10 03:57:44.817684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.376 [2024-12-10 03:57:45.565513] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.376 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.634 Malloc1 00:10:46.634 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.635 03:57:45 
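rpc_cmd, used throughout this stretch, is a thin wrapper over scripts/rpc.py talking to the target's default socket (/var/tmp/spdk.sock, exported earlier as DEFAULT_RPC_ADDR) inside the target namespace. Spelled out as plain RPC invocations, the provisioning just performed plus the two steps that follow in the trace below (namespace and listener) amount to:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, in-capsule data size 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420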
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.635 [2024-12-10 03:57:45.717330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:46.635 { 00:10:46.635 "name": "Malloc1", 00:10:46.635 "aliases": [ 00:10:46.635 "27d8f8e7-1800-436a-9f6c-b27cb3bf7c6b" 00:10:46.635 ], 00:10:46.635 "product_name": "Malloc disk", 00:10:46.635 "block_size": 512, 00:10:46.635 "num_blocks": 1048576, 00:10:46.635 "uuid": "27d8f8e7-1800-436a-9f6c-b27cb3bf7c6b", 00:10:46.635 "assigned_rate_limits": { 00:10:46.635 "rw_ios_per_sec": 0, 00:10:46.635 "rw_mbytes_per_sec": 0, 00:10:46.635 "r_mbytes_per_sec": 0, 00:10:46.635 "w_mbytes_per_sec": 0 00:10:46.635 }, 00:10:46.635 "claimed": true, 00:10:46.635 "claim_type": "exclusive_write", 00:10:46.635 "zoned": false, 00:10:46.635 "supported_io_types": { 00:10:46.635 "read": 
true, 00:10:46.635 "write": true, 00:10:46.635 "unmap": true, 00:10:46.635 "flush": true, 00:10:46.635 "reset": true, 00:10:46.635 "nvme_admin": false, 00:10:46.635 "nvme_io": false, 00:10:46.635 "nvme_io_md": false, 00:10:46.635 "write_zeroes": true, 00:10:46.635 "zcopy": true, 00:10:46.635 "get_zone_info": false, 00:10:46.635 "zone_management": false, 00:10:46.635 "zone_append": false, 00:10:46.635 "compare": false, 00:10:46.635 "compare_and_write": false, 00:10:46.635 "abort": true, 00:10:46.635 "seek_hole": false, 00:10:46.635 "seek_data": false, 00:10:46.635 "copy": true, 00:10:46.635 "nvme_iov_md": false 00:10:46.635 }, 00:10:46.635 "memory_domains": [ 00:10:46.635 { 00:10:46.635 "dma_device_id": "system", 00:10:46.635 "dma_device_type": 1 00:10:46.635 }, 00:10:46.635 { 00:10:46.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.635 "dma_device_type": 2 00:10:46.635 } 00:10:46.635 ], 00:10:46.635 "driver_specific": {} 00:10:46.635 } 00:10:46.635 ]' 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:46.635 03:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.008 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.008 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.008 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.008 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.008 03:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:49.907 03:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.165 03:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.731 03:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.663 ************************************ 00:10:51.663 START TEST filesystem_ext4 00:10:51.663 ************************************ 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
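Each of the filesystem_* passes that follow drives the same routine from target/filesystem.sh: format the GPT partition exported over NVMe/TCP, mount it, perform a small create/sync/remove cycle, unmount, then confirm the target process and its block devices are still present. A condensed sketch of that flow as the trace records it (device names and the nvmfpid variable are taken from the log; the retry loop inside make_filesystem is elided because every run here succeeds on the first attempt):

    fstype=ext4                                     # also run with btrfs and xfs below
    force=-f; [ "$fstype" = ext4 ] && force=-F      # ext4 takes -F, the others -f
    mkfs.$fstype $force /dev/nvme0n1p1              # make_filesystem
    mount /dev/nvme0n1p1 /mnt/device                # filesystem.sh@23
    touch /mnt/device/aaa && sync                   # small write, flushed
    rm /mnt/device/aaa && sync                      # and removed again
    umount /mnt/device
    kill -0 "$nvmfpid"                              # target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1           # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1         # partition still visible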
00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:51.663 03:57:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.663 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.921 Discarding device blocks: 0/522240 done 00:10:51.921 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:51.921 Filesystem UUID: 23a60e24-654c-40c5-a0d7-1eee752ba1fb 00:10:51.921 Superblock backups stored on blocks: 00:10:51.921 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:51.921 00:10:51.921 Allocating group tables: 0/64 done 00:10:51.921 Writing inode tables: 0/64 done 00:10:51.921 Creating journal (8192 blocks): done 00:10:51.921 Writing superblocks and filesystem accounting information: 0/64 done 00:10:51.921 00:10:51.921 03:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:51.921 03:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.183 
03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4157600 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.183 00:10:57.183 real 0m5.491s 00:10:57.183 user 0m0.033s 00:10:57.183 sys 0m0.065s 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:57.183 ************************************ 00:10:57.183 END TEST filesystem_ext4 00:10:57.183 ************************************ 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.183 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 ************************************ 00:10:57.442 START TEST filesystem_btrfs 00:10:57.442 ************************************ 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:57.442 03:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.442 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.700 btrfs-progs v6.8.1 00:10:57.700 See https://btrfs.readthedocs.io for more information. 00:10:57.700 00:10:57.700 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:57.700 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.700 this does not affect your deployments: 00:10:57.700 - DUP for metadata (-m dup) 00:10:57.700 - enabled no-holes (-O no-holes) 00:10:57.700 - enabled free-space-tree (-R free-space-tree) 00:10:57.700 00:10:57.700 Label: (null) 00:10:57.700 UUID: eaf18362-6d8a-47fe-adaa-e5a882fb5cec 00:10:57.700 Node size: 16384 00:10:57.700 Sector size: 4096 (CPU page size: 4096) 00:10:57.700 Filesystem size: 510.00MiB 00:10:57.700 Block group profiles: 00:10:57.700 Data: single 8.00MiB 00:10:57.700 Metadata: DUP 32.00MiB 00:10:57.700 System: DUP 8.00MiB 00:10:57.700 SSD detected: yes 00:10:57.700 Zoned device: no 00:10:57.700 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.700 Checksum: crc32c 00:10:57.700 Number of devices: 1 00:10:57.700 Devices: 00:10:57.700 ID SIZE PATH 00:10:57.700 1 510.00MiB /dev/nvme0n1p1 00:10:57.700 00:10:57.700 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:57.700 03:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.635 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.635 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:58.635 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.635 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:58.635 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4157600 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.636 
03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.636 00:10:58.636 real 0m1.276s 00:10:58.636 user 0m0.024s 00:10:58.636 sys 0m0.116s 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.636 ************************************ 00:10:58.636 END TEST filesystem_btrfs 00:10:58.636 ************************************ 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.636 ************************************ 00:10:58.636 START TEST filesystem_xfs 00:10:58.636 ************************************ 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:58.636 03:57:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:58.636 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:58.636 = sectsz=512 attr=2, projid32bit=1 00:10:58.636 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.636 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.636 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:58.636 = sunit=0 swidth=0 blks 00:10:58.636 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.636 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.636 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:58.636 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:59.571 Discarding blocks...Done. 00:10:59.571 03:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:59.571 03:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4157600 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.470 00:11:01.470 real 0m2.727s 00:11:01.470 user 0m0.022s 00:11:01.470 sys 0m0.073s 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.470 ************************************ 00:11:01.470 END TEST filesystem_xfs 00:11:01.470 ************************************ 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.470 03:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.470 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4157600 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4157600 ']' 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4157600 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157600 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157600' 00:11:01.729 killing process with pid 4157600 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4157600 00:11:01.729 03:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 4157600 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:01.988 00:11:01.988 real 0m16.499s 00:11:01.988 user 1m5.049s 00:11:01.988 sys 0m1.378s 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.988 ************************************ 00:11:01.988 END TEST nvmf_filesystem_no_in_capsule 00:11:01.988 ************************************ 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.988 ************************************ 00:11:01.988 START TEST nvmf_filesystem_in_capsule 00:11:01.988 ************************************ 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4160467 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4160467 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4160467 ']' 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
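The nvmf_filesystem_in_capsule group now starting re-runs the identical ext4/btrfs/xfs sequence; the only functional difference visible in the trace is the in-capsule data size handed to the TCP transport at filesystem.sh@52, 4096 bytes instead of the 0 the no_in_capsule group was invoked with, so small write payloads can ride inside the NVMe/TCP command capsule rather than being fetched in a separate data transfer. A minimal sketch of that single divergence, using the suite's rpc_cmd wrapper as it appears in the log:

    in_capsule=4096                                 # the previous group ran with 0
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c $in_capsule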
00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.988 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.988 [2024-12-10 03:58:01.258688] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:01.988 [2024-12-10 03:58:01.258731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.246 [2024-12-10 03:58:01.324769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.246 [2024-12-10 03:58:01.363438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.246 [2024-12-10 03:58:01.363477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.246 [2024-12-10 03:58:01.363484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.246 [2024-12-10 03:58:01.363490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.246 [2024-12-10 03:58:01.363495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.246 [2024-12-10 03:58:01.364961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.246 [2024-12-10 03:58:01.365071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.246 [2024-12-10 03:58:01.365206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.246 [2024-12-10 03:58:01.365211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.246 [2024-12-10 03:58:01.510143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.246 03:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.246 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.504 Malloc1 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.504 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.505 [2024-12-10 03:58:01.671342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:02.505 03:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:02.505 { 00:11:02.505 "name": "Malloc1", 00:11:02.505 "aliases": [ 00:11:02.505 "788c4b12-ccda-414f-96e3-00eee845c190" 00:11:02.505 ], 00:11:02.505 "product_name": "Malloc disk", 00:11:02.505 "block_size": 512, 00:11:02.505 "num_blocks": 1048576, 00:11:02.505 "uuid": "788c4b12-ccda-414f-96e3-00eee845c190", 00:11:02.505 "assigned_rate_limits": { 00:11:02.505 "rw_ios_per_sec": 0, 00:11:02.505 "rw_mbytes_per_sec": 0, 00:11:02.505 "r_mbytes_per_sec": 0, 00:11:02.505 "w_mbytes_per_sec": 0 00:11:02.505 }, 00:11:02.505 "claimed": true, 00:11:02.505 "claim_type": "exclusive_write", 00:11:02.505 "zoned": false, 00:11:02.505 "supported_io_types": { 00:11:02.505 "read": true, 00:11:02.505 "write": true, 00:11:02.505 "unmap": true, 00:11:02.505 "flush": true, 00:11:02.505 "reset": true, 00:11:02.505 "nvme_admin": false, 00:11:02.505 "nvme_io": false, 00:11:02.505 "nvme_io_md": false, 00:11:02.505 "write_zeroes": true, 00:11:02.505 "zcopy": true, 00:11:02.505 "get_zone_info": false, 00:11:02.505 "zone_management": false, 00:11:02.505 "zone_append": false, 00:11:02.505 "compare": false, 00:11:02.505 "compare_and_write": false, 00:11:02.505 "abort": true, 00:11:02.505 "seek_hole": false, 00:11:02.505 "seek_data": false, 00:11:02.505 "copy": true, 00:11:02.505 "nvme_iov_md": false 00:11:02.505 }, 00:11:02.505 "memory_domains": [ 00:11:02.505 { 00:11:02.505 "dma_device_id": "system", 00:11:02.505 "dma_device_type": 1 00:11:02.505 }, 00:11:02.505 { 00:11:02.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.505 "dma_device_type": 2 00:11:02.505 } 00:11:02.505 ], 00:11:02.505 "driver_specific": {} 00:11:02.505 } 00:11:02.505 ]' 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:02.505 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:02.763 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.763 03:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.696 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.696 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.696 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.696 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.696 03:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:05.699 03:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.265 03:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.831 03:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.766 ************************************ 00:11:07.766 START TEST filesystem_in_capsule_ext4 00:11:07.766 ************************************ 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:07.766 03:58:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:07.766 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.024 Discarding device blocks: 0/522240 done 00:11:08.024 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.024 Filesystem UUID: 0c415484-3bd0-4796-8743-ad7eebcecf7d 00:11:08.024 Superblock backups stored on blocks: 00:11:08.024 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.024 00:11:08.024 Allocating group tables: 0/64 done 00:11:08.024 Writing inode tables: 
0/64 done 00:11:08.024 Creating journal (8192 blocks): done 00:11:08.024 Writing superblocks and filesystem accounting information: 0/64 done 00:11:08.024 00:11:08.024 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:08.024 03:58:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.290 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.290 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:13.290 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4160467 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.548 00:11:13.548 real 0m5.630s 00:11:13.548 user 0m0.031s 00:11:13.548 sys 0m0.063s 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:13.548 ************************************ 00:11:13.548 END TEST filesystem_in_capsule_ext4 00:11:13.548 ************************************ 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.548 
************************************ 00:11:13.548 START TEST filesystem_in_capsule_btrfs 00:11:13.548 ************************************ 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:13.548 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:13.549 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:13.807 btrfs-progs v6.8.1 00:11:13.807 See https://btrfs.readthedocs.io for more information. 00:11:13.807 00:11:13.807 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:13.807 NOTE: several default settings have changed in version 5.15, please make sure 00:11:13.807 this does not affect your deployments: 00:11:13.807 - DUP for metadata (-m dup) 00:11:13.807 - enabled no-holes (-O no-holes) 00:11:13.807 - enabled free-space-tree (-R free-space-tree) 00:11:13.807 00:11:13.807 Label: (null) 00:11:13.807 UUID: b44f29b3-cbf5-49a7-b3f5-be052e650fe7 00:11:13.807 Node size: 16384 00:11:13.807 Sector size: 4096 (CPU page size: 4096) 00:11:13.807 Filesystem size: 510.00MiB 00:11:13.807 Block group profiles: 00:11:13.807 Data: single 8.00MiB 00:11:13.807 Metadata: DUP 32.00MiB 00:11:13.807 System: DUP 8.00MiB 00:11:13.807 SSD detected: yes 00:11:13.807 Zoned device: no 00:11:13.807 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:13.807 Checksum: crc32c 00:11:13.807 Number of devices: 1 00:11:13.807 Devices: 00:11:13.807 ID SIZE PATH 00:11:13.807 1 510.00MiB /dev/nvme0n1p1 00:11:13.807 00:11:13.807 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.807 03:58:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4160467 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.065 00:11:14.065 real 0m0.465s 00:11:14.065 user 0m0.027s 00:11:14.065 sys 0m0.117s 00:11:14.065 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:14.066 ************************************ 00:11:14.066 END TEST filesystem_in_capsule_btrfs 00:11:14.066 ************************************ 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.066 ************************************ 00:11:14.066 START TEST filesystem_in_capsule_xfs 00:11:14.066 ************************************ 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:14.066 03:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:14.066 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:14.066 = sectsz=512 attr=2, projid32bit=1 00:11:14.066 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:14.066 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:14.066 data = bsize=4096 blocks=130560, imaxpct=25 00:11:14.066 = sunit=0 swidth=0 blks 00:11:14.066 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:14.066 log =internal log bsize=4096 blocks=16384, version=2 00:11:14.066 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:14.066 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:15.001 Discarding blocks...Done. 
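The mkfs.xfs geometry printed above is internally consistent with the 510.00MiB partition created earlier: four allocation groups of 32640 blocks each give the reported 130560 data blocks, and 130560 blocks at bsize=4096 come to exactly 510 MiB. A quick arithmetic check, nothing more:

    echo $(( 4 * 32640 ))                # agcount * agsize = 130560 blocks
    echo $(( 130560 * 4096 / 1048576 ))  # blocks * bsize   = 510 MiB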
00:11:15.001 03:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:15.001 03:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.910 03:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4160467 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.910 00:11:16.910 real 0m2.794s 00:11:16.910 user 0m0.030s 00:11:16.910 sys 0m0.066s 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.910 ************************************ 00:11:16.910 END TEST filesystem_in_capsule_xfs 00:11:16.910 ************************************ 00:11:16.910 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:17.168 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:17.168 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4160467 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4160467 ']' 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4160467 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160467 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160467' 00:11:17.427 killing process with pid 4160467 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4160467 00:11:17.427 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4160467 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:17.686 00:11:17.686 real 0m15.683s 00:11:17.686 user 1m1.735s 00:11:17.686 sys 0m1.343s 00:11:17.686 03:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.686 ************************************ 00:11:17.686 END TEST nvmf_filesystem_in_capsule 00:11:17.686 ************************************ 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.686 rmmod nvme_tcp 00:11:17.686 rmmod nvme_fabrics 00:11:17.686 rmmod nvme_keyring 00:11:17.686 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.946 03:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.853 00:11:19.853 real 0m40.907s 00:11:19.853 user 2m8.847s 00:11:19.853 sys 0m7.425s 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.853 
************************************ 00:11:19.853 END TEST nvmf_filesystem 00:11:19.853 ************************************ 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.853 ************************************ 00:11:19.853 START TEST nvmf_target_discovery 00:11:19.853 ************************************ 00:11:19.853 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:20.114 * Looking for test storage... 00:11:20.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.114 --rc genhtml_branch_coverage=1 00:11:20.114 --rc genhtml_function_coverage=1 00:11:20.114 --rc genhtml_legend=1 00:11:20.114 --rc geninfo_all_blocks=1 00:11:20.114 --rc geninfo_unexecuted_blocks=1 00:11:20.114 00:11:20.114 ' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.114 --rc genhtml_branch_coverage=1 00:11:20.114 --rc genhtml_function_coverage=1 00:11:20.114 --rc genhtml_legend=1 00:11:20.114 --rc geninfo_all_blocks=1 00:11:20.114 --rc geninfo_unexecuted_blocks=1 00:11:20.114 00:11:20.114 ' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.114 --rc genhtml_branch_coverage=1 00:11:20.114 --rc genhtml_function_coverage=1 00:11:20.114 --rc genhtml_legend=1 00:11:20.114 --rc geninfo_all_blocks=1 00:11:20.114 --rc geninfo_unexecuted_blocks=1 00:11:20.114 00:11:20.114 ' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.114 --rc genhtml_branch_coverage=1 00:11:20.114 --rc genhtml_function_coverage=1 00:11:20.114 --rc genhtml_legend=1 00:11:20.114 --rc geninfo_all_blocks=1 00:11:20.114 --rc geninfo_unexecuted_blocks=1 00:11:20.114 00:11:20.114 ' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.114 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.115 03:58:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.692 03:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:26.692 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:26.692 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:26.692 Found net devices under 0000:af:00.0: cvl_0_0 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:26.692 Found net devices under 0000:af:00.1: cvl_0_1 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.692 03:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.692 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.692 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.692 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.692 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.692 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:11:26.693 00:11:26.693 --- 10.0.0.2 ping statistics --- 00:11:26.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.693 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:26.693 00:11:26.693 --- 10.0.0.1 ping statistics --- 00:11:26.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.693 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=4166834 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 4166834 00:11:26.693 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4166834 ']' 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 [2024-12-10 03:58:25.300863] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:26.693 [2024-12-10 03:58:25.300905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.693 [2024-12-10 03:58:25.383334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.693 [2024-12-10 03:58:25.423769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.693 [2024-12-10 03:58:25.423804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.693 [2024-12-10 03:58:25.423812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.693 [2024-12-10 03:58:25.423818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.693 [2024-12-10 03:58:25.423823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
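Before any of the rpc_cmd calls below can succeed, nvmfappstart launches the target inside the traffic namespace and blocks until its RPC socket answers. A rough sketch of that launch-and-wait step, with the namespace, binary path, and flags taken from this log; the polling loop is an assumed stand-in for the suite's waitforlisten helper, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket; rpc_get_methods is a cheap no-op query
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done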
00:11:26.693 [2024-12-10 03:58:25.425259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.693 [2024-12-10 03:58:25.425362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.693 [2024-12-10 03:58:25.425470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.693 [2024-12-10 03:58:25.425471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 [2024-12-10 03:58:25.561973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 Null1 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 [2024-12-10 03:58:25.615304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 Null2 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:26.693 Null3 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.693 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.694 Null4 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.694 03:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:11:26.694
00:11:26.694 Discovery Log Number of Records 6, Generation counter 6
00:11:26.694 =====Discovery Log Entry 0======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: current discovery subsystem
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4420
00:11:26.694 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: explicit discovery connections, duplicate discovery information
00:11:26.694 sectype: none
00:11:26.694 =====Discovery Log Entry 1======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: nvme subsystem
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4420
00:11:26.694 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: none
00:11:26.694 sectype: none
00:11:26.694 =====Discovery Log Entry 2======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: nvme subsystem
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4420
00:11:26.694 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: none
00:11:26.694 sectype: none
00:11:26.694 =====Discovery Log Entry 3======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: nvme subsystem
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4420
00:11:26.694 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: none
00:11:26.694 sectype: none
00:11:26.694 =====Discovery Log Entry 4======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: nvme subsystem
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4420
00:11:26.694 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: none
00:11:26.694 sectype: none
00:11:26.694 =====Discovery Log Entry 5======
00:11:26.694 trtype: tcp
00:11:26.694 adrfam: ipv4
00:11:26.694 subtype: discovery subsystem referral
00:11:26.694 treq: not required
00:11:26.694 portid: 0
00:11:26.694 trsvcid: 4430
00:11:26.694 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:26.694 traddr: 10.0.0.2
00:11:26.694 eflags: none
00:11:26.694 sectype: none
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:26.694 Perform nvmf subsystem discovery via RPC
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.694 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.694 [
00:11:26.694 {
00:11:26.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:26.694 "subtype": "Discovery",
00:11:26.694 "listen_addresses": [
00:11:26.694 {
00:11:26.694 "trtype": "TCP",
00:11:26.694 "adrfam": "IPv4",
00:11:26.694 "traddr": "10.0.0.2",
00:11:26.694 "trsvcid": "4420"
00:11:26.694 }
00:11:26.694 ],
00:11:26.694 "allow_any_host": true,
00:11:26.694 "hosts": []
00:11:26.694 },
00:11:26.694 {
00:11:26.694 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:26.694 "subtype": "NVMe",
00:11:26.694 "listen_addresses": [
00:11:26.694 {
00:11:26.694 "trtype": "TCP",
00:11:26.694 "adrfam": "IPv4",
00:11:26.694 "traddr": "10.0.0.2",
00:11:26.694 "trsvcid": "4420"
00:11:26.694 }
00:11:26.694 ],
00:11:26.694 "allow_any_host": true,
00:11:26.694 "hosts": [],
00:11:26.694 "serial_number": "SPDK00000000000001",
00:11:26.694 "model_number": "SPDK bdev Controller",
00:11:26.694 "max_namespaces": 32,
00:11:26.694 "min_cntlid": 1,
00:11:26.694 "max_cntlid": 65519,
00:11:26.694 "namespaces": [
00:11:26.694 {
00:11:26.694 "nsid": 1,
00:11:26.694 "bdev_name": "Null1",
00:11:26.694 "name": "Null1",
00:11:26.694 "nguid": "82235E64353E454FB89F70F86C552174",
00:11:26.694 "uuid": "82235e64-353e-454f-b89f-70f86c552174"
00:11:26.694 }
00:11:26.694 ]
00:11:26.694 },
00:11:26.694 {
00:11:26.694 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:26.694 "subtype": "NVMe",
00:11:26.694 "listen_addresses": [
00:11:26.694 {
00:11:26.694 "trtype": "TCP",
00:11:26.694 "adrfam": "IPv4",
00:11:26.694 "traddr": "10.0.0.2",
00:11:26.694 "trsvcid": "4420"
00:11:26.694 }
00:11:26.694 ],
00:11:26.694 "allow_any_host": true,
00:11:26.694 "hosts": [],
00:11:26.694 "serial_number": "SPDK00000000000002",
00:11:26.953 "model_number": "SPDK bdev Controller",
00:11:26.953 "max_namespaces": 32,
00:11:26.953 "min_cntlid": 1,
00:11:26.953 "max_cntlid": 65519,
00:11:26.953 "namespaces": [
00:11:26.953 {
00:11:26.953 "nsid": 1,
00:11:26.953 "bdev_name": "Null2",
00:11:26.953 "name": "Null2",
00:11:26.953 "nguid": "9573E79FC581464E98092577730C5380",
00:11:26.953 "uuid": "9573e79f-c581-464e-9809-2577730c5380"
00:11:26.953 }
00:11:26.953 ]
00:11:26.953 },
00:11:26.953 {
00:11:26.953 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:26.953 "subtype": "NVMe",
00:11:26.953 "listen_addresses": [
00:11:26.953 {
00:11:26.953 "trtype": "TCP",
00:11:26.953 "adrfam": "IPv4",
00:11:26.953 "traddr": "10.0.0.2",
00:11:26.953 "trsvcid": "4420"
00:11:26.953 }
00:11:26.953 ],
00:11:26.953 "allow_any_host": true,
00:11:26.953 "hosts": [],
00:11:26.953 "serial_number": "SPDK00000000000003",
00:11:26.953 "model_number": "SPDK bdev Controller",
00:11:26.953 "max_namespaces": 32,
00:11:26.953 "min_cntlid": 1,
00:11:26.953 "max_cntlid": 65519,
00:11:26.953 "namespaces": [
00:11:26.953 {
00:11:26.953 "nsid": 1,
00:11:26.953 "bdev_name": "Null3",
00:11:26.953 "name": "Null3",
00:11:26.953 "nguid": "870B39E221A54343B6630114631EA722",
00:11:26.953 "uuid": "870b39e2-21a5-4343-b663-0114631ea722"
00:11:26.953 }
00:11:26.953 ]
00:11:26.953 },
00:11:26.953 {
00:11:26.953 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:26.953 "subtype": "NVMe",
00:11:26.953 "listen_addresses": [
00:11:26.953 {
00:11:26.953 "trtype": "TCP",
00:11:26.953 "adrfam": "IPv4",
00:11:26.953 "traddr": "10.0.0.2",
00:11:26.953 "trsvcid": "4420"
00:11:26.953 }
00:11:26.953 ],
00:11:26.953 "allow_any_host": true,
00:11:26.953 "hosts": [],
00:11:26.953 "serial_number": "SPDK00000000000004",
00:11:26.954 "model_number": "SPDK bdev Controller",
00:11:26.954 "max_namespaces": 32,
00:11:26.954 "min_cntlid": 1,
00:11:26.954 "max_cntlid": 65519,
00:11:26.954 "namespaces": [
00:11:26.954 {
00:11:26.954 "nsid": 1,
00:11:26.954 "bdev_name": "Null4",
00:11:26.954 "name": "Null4",
00:11:26.954 "nguid": "DBE0A6E07C404532947C37B942B4A0B0",
00:11:26.954 "uuid": "dbe0a6e0-7c40-4532-947c-37b942b4a0b0"
00:11:26.954 }
00:11:26.954 ]
00:11:26.954 }
00:11:26.954 ]
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.954 03:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.954 03:58:26
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.954 03:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:26.954 rmmod nvme_tcp
00:11:26.954 rmmod nvme_fabrics
00:11:26.954 rmmod nvme_keyring
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 4166834 ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 4166834
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4166834 ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4166834
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166834
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166834'
00:11:26.954 killing process with pid 4166834
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4166834
00:11:26.954 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4166834
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:27.214 03:58:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:29.752
00:11:29.752 real 0m9.318s
00:11:29.752 user 0m5.662s
00:11:29.752 sys 0m4.780s
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:29.752 ************************************
00:11:29.752 END TEST nvmf_target_discovery
00:11:29.752 ************************************
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:29.752 ************************************
00:11:29.752 START TEST nvmf_referrals
00:11:29.752 ************************************
00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:29.752 * Looking for test storage...
00:11:29.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.752 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.753 --rc genhtml_branch_coverage=1 00:11:29.753 --rc genhtml_function_coverage=1 00:11:29.753 --rc genhtml_legend=1 00:11:29.753 --rc geninfo_all_blocks=1 00:11:29.753 --rc geninfo_unexecuted_blocks=1 00:11:29.753 00:11:29.753 ' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.753 --rc genhtml_branch_coverage=1 00:11:29.753 --rc genhtml_function_coverage=1 00:11:29.753 --rc genhtml_legend=1 00:11:29.753 --rc geninfo_all_blocks=1 00:11:29.753 --rc geninfo_unexecuted_blocks=1 00:11:29.753 00:11:29.753 ' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.753 --rc genhtml_branch_coverage=1 00:11:29.753 --rc genhtml_function_coverage=1 00:11:29.753 --rc genhtml_legend=1 00:11:29.753 --rc geninfo_all_blocks=1 00:11:29.753 --rc geninfo_unexecuted_blocks=1 00:11:29.753 00:11:29.753 ' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.753 --rc genhtml_branch_coverage=1 00:11:29.753 --rc genhtml_function_coverage=1 00:11:29.753 --rc genhtml_legend=1 00:11:29.753 --rc geninfo_all_blocks=1 00:11:29.753 --rc geninfo_unexecuted_blocks=1 00:11:29.753 00:11:29.753 ' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
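The referral endpoints being defined at this point in the trace (127.0.0.2 and 127.0.0.3 above; 127.0.0.4 and the referral port 4430 are set in the lines immediately below) are what referrals.sh later feeds to the discovery-referral RPCs. As a minimal, hedged sketch of that round trip, using SPDK's rpc.py client with the same RPC method names that appear in this trace (the rpc.py path and the default RPC socket are assumptions here; the test itself issues these calls through its rpc_cmd wrapper):

  # Sketch only: add, list, and remove a discovery referral on a running target.
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # The test pipes this through jq to count and sort the advertised referrals.
  ./scripts/rpc.py nvmf_discovery_get_referrals
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

A referral simply redirects a discovery client to a second discovery service, which is why the same addresses later surface in the nvme discover output as "discovery subsystem referral" records.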
00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.753 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.754 03:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:36.324 03:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:36.324 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:36.324 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:36.324 
03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:36.324 Found net devices under 0000:af:00.0: cvl_0_0 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:36.324 Found net devices under 0000:af:00.1: cvl_0_1 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.324 03:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:36.324 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:36.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:36.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms
00:11:36.324
00:11:36.324 --- 10.0.0.2 ping statistics ---
00:11:36.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:36.324 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:36.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:36.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms
00:11:36.325
00:11:36.325 --- 10.0.0.1 ping statistics ---
00:11:36.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:36.325 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=4170399
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 4170399
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 4170399 ']'
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:36.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
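Taken together, the bring-up traced above reduces to a short sequence: one physical E810 port (cvl_0_0) is moved into a private network namespace, each side gets an address on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and nvmf_tgt is launched inside the namespace while the harness waits for its RPC socket. A minimal sketch of that flow, assuming the interface names and addresses shown in the trace and an SPDK build tree as the working directory (the polling loop is an illustrative stand-in for the waitforlisten helper in autotest_common.sh):

  #!/usr/bin/env bash
  # Sketch only: condensed from the nvmftestinit/nvmfappstart steps in this log.
  NS=cvl_0_0_ns_spdk                     # target-side network namespace
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"        # hand the target port to the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1    # the initiator keeps the second port
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Start the target inside the namespace and wait for its RPC socket.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break    # socket appears once the app is up
      sleep 0.5
  done

Running the application inside the namespace is what lets a single host act as both target (10.0.0.2) and initiator (10.0.0.1) over a real link, which the two pings above verify in each direction.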
00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.325 03:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 [2024-12-10 03:58:34.787623] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:36.325 [2024-12-10 03:58:34.787670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.325 [2024-12-10 03:58:34.866962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.325 [2024-12-10 03:58:34.907257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.325 [2024-12-10 03:58:34.907295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.325 [2024-12-10 03:58:34.907302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.325 [2024-12-10 03:58:34.907308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.325 [2024-12-10 03:58:34.907316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.325 [2024-12-10 03:58:34.908670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.325 [2024-12-10 03:58:34.908777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.325 [2024-12-10 03:58:34.908883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.325 [2024-12-10 03:58:34.908885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 [2024-12-10 03:58:35.058509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:36.325 [2024-12-10 03:58:35.089351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:36.326 03:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.326 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.584 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.842 03:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.100 03:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.100 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.358 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.616 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.874 03:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
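
[Annotation] The subnqn assertions above all go through the same helper, traced at referrals.sh@31-34: dump the discovery log page as JSON and keep only records of one subtype. A sketch reconstructed from that trace (the --hostnqn/--hostid arguments the script passes are omitted here for brevity):

    # get_discovery_entries, as traced at referrals.sh@31-34.
    get_discovery_entries() {
        local subtype=$1
        nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
            | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
    }

    # e.g. the referral entry must advertise the well-known discovery NQN:
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
    # -> nqn.2014-08.org.nvmexpress.discovery
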
00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.134 rmmod nvme_tcp 00:11:38.134 rmmod nvme_fabrics 00:11:38.134 rmmod nvme_keyring 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 4170399 ']' 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 4170399 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 4170399 ']' 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 4170399 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4170399 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4170399' 00:11:38.134 killing process with pid 4170399 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 4170399 00:11:38.134 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 4170399 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.394 03:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.394 03:58:37 
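
[Annotation] The nvmftestfini teardown traced here unloads the kernel fabrics modules, kills the target process, and scrubs the tagged firewall rules. A condensed reconstruction of the killprocess and iptr steps (names match the helpers in the trace, bodies are abridged; killprocess assumes the pid is a child of the calling shell so that wait works):

    # Condensed from the nvmftestfini trace; not the verbatim helpers.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                 # already gone
        # never kill an elevated wrapper by mistake
        [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    # iptr: reload the ruleset with every SPDK_NVMF-tagged rule filtered out.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    set +e                      # module unload may legitimately fail mid-loop
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
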
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.299 00:11:40.299 real 0m11.013s 00:11:40.299 user 0m12.732s 00:11:40.299 sys 0m5.222s 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.299 ************************************ 00:11:40.299 END TEST nvmf_referrals 00:11:40.299 ************************************ 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.299 03:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.559 ************************************ 00:11:40.559 START TEST nvmf_connect_disconnect 00:11:40.559 ************************************ 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:40.559 * Looking for test storage... 00:11:40.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.559 03:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.559 --rc genhtml_branch_coverage=1 00:11:40.559 --rc genhtml_function_coverage=1 00:11:40.559 --rc genhtml_legend=1 00:11:40.559 --rc geninfo_all_blocks=1 00:11:40.559 --rc geninfo_unexecuted_blocks=1 00:11:40.559 00:11:40.559 ' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.559 --rc genhtml_branch_coverage=1 00:11:40.559 --rc genhtml_function_coverage=1 00:11:40.559 --rc genhtml_legend=1 00:11:40.559 --rc geninfo_all_blocks=1 00:11:40.559 --rc geninfo_unexecuted_blocks=1 00:11:40.559 00:11:40.559 ' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.559 --rc genhtml_branch_coverage=1 00:11:40.559 --rc genhtml_function_coverage=1 00:11:40.559 --rc genhtml_legend=1 00:11:40.559 --rc geninfo_all_blocks=1 00:11:40.559 --rc geninfo_unexecuted_blocks=1 00:11:40.559 00:11:40.559 ' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.559 --rc genhtml_branch_coverage=1 00:11:40.559 --rc genhtml_function_coverage=1 00:11:40.559 --rc genhtml_legend=1 00:11:40.559 --rc geninfo_all_blocks=1 00:11:40.559 --rc geninfo_unexecuted_blocks=1 00:11:40.559 00:11:40.559 ' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.559 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.560 03:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.560 03:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.138 
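
[Annotation] One real failure is captured just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because a test knob expands to the empty string, and [ rejects '' as an integer ("integer expression expected"). The run survives only because the false branch is the harmless one. The shape of the bug and two defensive rewrites ("flag" is a stand-in name, not a variable from this run):

    flag=""                                  # e.g. an SPDK_TEST_* knob never set
    [ "$flag" -eq 1 ] && echo on             # -> [: : integer expression expected

    # Defensive variants that treat unset/empty as "off" without the error:
    [ "${flag:-0}" -eq 1 ] && echo on        # default the expansion to 0
    [[ "$flag" == 1 ]] && echo on            # string compare, no numeric parse
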
03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:47.138 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.138 
03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:47.138 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:47.138 Found net devices under 0000:af:00.0: cvl_0_0 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
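
[Annotation] The device scan above matches PCI vendor/device IDs against the supported-NIC tables (e810 0x1592/0x159b, x722 0x37d2, assorted Mellanox IDs) and then resolves each surviving function to its kernel netdevs through sysfs. A minimal standalone version of that sysfs walk, hard-coded to the E810 ID found on this machine:

    #!/usr/bin/env bash
    # Map supported NIC PCI functions to kernel net device names via sysfs,
    # the way gather_supported_nvmf_pci_devs does in the trace above.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        read -r vendor < "$pci/vendor"
        read -r device < "$pci/device"
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # Intel E810
        net_devs=("$pci"/net/*)
        (( ${#net_devs[@]} )) \
            && echo "Found net devices under ${pci##*/}: ${net_devs[*]##*/}"
    done
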
00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:47.138 Found net devices under 0000:af:00.1: cvl_0_1 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.138 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:11:47.139 00:11:47.139 --- 10.0.0.2 ping statistics --- 00:11:47.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.139 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:47.139 00:11:47.139 --- 10.0.0.1 ping statistics --- 00:11:47.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.139 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=4174417 00:11:47.139 03:58:45 
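
[Annotation] The nvmf_tcp_init sequence traced above moves the target-side port of the NIC pair into its own network namespace, so initiator (10.0.0.1, host namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) talk over the real link, then punches the NVMe/TCP port through the firewall with a tagged rule and ping-verifies both directions. A condensed sketch using the interface names from this machine (the iptables rule is copied from the ipts wrapper in the trace):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, host ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, tagged so teardown can find and remove the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
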
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 4174417 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 4174417 ']' 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.139 03:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 [2024-12-10 03:58:45.904300] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:47.139 [2024-12-10 03:58:45.904356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.139 [2024-12-10 03:58:45.983376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.139 [2024-12-10 03:58:46.024316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.139 [2024-12-10 03:58:46.024352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.139 [2024-12-10 03:58:46.024360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.139 [2024-12-10 03:58:46.024367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.139 [2024-12-10 03:58:46.024373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
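
[Annotation] Target start-up as traced: nvmf_tgt is launched inside the namespace and waitforlisten blocks until the RPC socket answers. The trace only shows waitforlisten's arguments (rpc_addr=/var/tmp/spdk.sock, max_retries=100), not its loop body, so the polling below is a guess at the mechanism, probing with the real rpc_get_methods RPC:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten sketch: poll until the UNIX-domain RPC socket answers.
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] \
            && scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null \
            && break
        sleep 0.5
    done
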
00:11:47.139 [2024-12-10 03:58:46.025726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.139 [2024-12-10 03:58:46.025836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.139 [2024-12-10 03:58:46.025944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.139 [2024-12-10 03:58:46.025946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 [2024-12-10 03:58:46.175503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 03:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.139 [2024-12-10 03:58:46.245413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:47.139 03:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:50.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.548 rmmod nvme_tcp 00:12:03.548 rmmod nvme_fabrics 00:12:03.548 rmmod nvme_keyring 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 4174417 ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4174417 ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
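
[Annotation] The test body above is the whole connect_disconnect scenario: build a TCP transport, back a subsystem with a 64 MiB malloc bdev, expose it on 10.0.0.2:4420, then connect and disconnect an initiator num_iterations=5 times (hence the five "disconnected 1 controller(s)" lines). The RPC sequence below is copied from the trace; the loop body is reconstructed, because the script turns xtrace off at connect_disconnect.sh@34 and only the disconnect messages survive in the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512              # prints: Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    for ((i = 0; i < 5; i++)); do                         # num_iterations=5
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # "... disconnected 1 controller(s)"
    done
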
00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4174417' 00:12:03.548 killing process with pid 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 4174417 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.548 03:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.453 00:12:05.453 real 0m25.135s 00:12:05.453 user 1m7.772s 00:12:05.453 sys 0m5.767s 00:12:05.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.453 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.453 ************************************ 00:12:05.453 END TEST nvmf_connect_disconnect 00:12:05.453 ************************************ 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.712 03:59:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 ************************************ 00:12:05.712 START TEST nvmf_multitarget 00:12:05.712 ************************************ 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.712 * Looking for test storage... 00:12:05.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.712 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.713 --rc genhtml_branch_coverage=1 00:12:05.713 --rc genhtml_function_coverage=1 00:12:05.713 --rc genhtml_legend=1 00:12:05.713 --rc geninfo_all_blocks=1 00:12:05.713 --rc geninfo_unexecuted_blocks=1 00:12:05.713 00:12:05.713 ' 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.713 --rc genhtml_branch_coverage=1 00:12:05.713 --rc genhtml_function_coverage=1 00:12:05.713 --rc genhtml_legend=1 00:12:05.713 --rc geninfo_all_blocks=1 00:12:05.713 --rc geninfo_unexecuted_blocks=1 00:12:05.713 00:12:05.713 ' 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.713 --rc genhtml_branch_coverage=1 00:12:05.713 --rc genhtml_function_coverage=1 00:12:05.713 --rc genhtml_legend=1 00:12:05.713 --rc geninfo_all_blocks=1 00:12:05.713 --rc geninfo_unexecuted_blocks=1 00:12:05.713 00:12:05.713 ' 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.713 --rc genhtml_branch_coverage=1 00:12:05.713 --rc genhtml_function_coverage=1 00:12:05.713 --rc genhtml_legend=1 00:12:05.713 --rc geninfo_all_blocks=1 00:12:05.713 --rc geninfo_unexecuted_blocks=1 00:12:05.713 00:12:05.713 ' 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.713 03:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:05.713 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.972 03:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.972 03:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.972 03:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.393 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:11.394 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:11.394 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:11.394 Found net devices under 0000:af:00.0: cvl_0_0 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:11.394 Found net devices under 0000:af:00.1: cvl_0_1 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.394 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:12:11.653 00:12:11.653 --- 10.0.0.2 ping statistics --- 00:12:11.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.653 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:11.653 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:11.912 00:12:11.912 --- 10.0.0.1 ping statistics --- 00:12:11.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.912 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=4180760 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 4180760 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 4180760 ']' 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.912 03:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.912 [2024-12-10 03:59:11.038708] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:12:11.912 [2024-12-10 03:59:11.038761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.912 [2024-12-10 03:59:11.118921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.912 [2024-12-10 03:59:11.158765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.912 [2024-12-10 03:59:11.158801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.912 [2024-12-10 03:59:11.158808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.912 [2024-12-10 03:59:11.158825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.912 [2024-12-10 03:59:11.158831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.912 [2024-12-10 03:59:11.160243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.912 [2024-12-10 03:59:11.160284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.912 [2024-12-10 03:59:11.160389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.912 [2024-12-10 03:59:11.160390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:12.170 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:12.171 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:12.429 "nvmf_tgt_1" 00:12:12.429 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:12.429 "nvmf_tgt_2" 00:12:12.429 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
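The multitarget test drives everything through multitarget_rpc.py: count the targets, create two more, then (below) delete them and re-check the count. Condensed into a standalone sketch using only calls that appear in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default

Each delete returns "true" in the log, and the final jq length of 1 is what lets the test fall through its '[' 1 '!=' 1 ']' guard without failing.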
00:12:12.429 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:12.686 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:12.686 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:12.686 true 00:12:12.686 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:12.686 true 00:12:12.686 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.686 03:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.944 rmmod nvme_tcp 00:12:12.944 rmmod nvme_fabrics 00:12:12.944 rmmod nvme_keyring 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 4180760 ']' 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 4180760 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 4180760 ']' 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 4180760 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4180760 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.944 03:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4180760' 00:12:12.944 killing process with pid 4180760 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 4180760 00:12:12.944 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 4180760 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.203 03:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.110 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.110 00:12:15.110 real 0m9.577s 00:12:15.110 user 0m7.088s 00:12:15.110 sys 0m4.869s 00:12:15.110 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.110 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.110 ************************************ 00:12:15.110 END TEST nvmf_multitarget 00:12:15.110 ************************************ 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.370 ************************************ 00:12:15.370 START TEST nvmf_rpc 00:12:15.370 ************************************ 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.370 * Looking for test storage... 
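Before nvmf_rpc starts, nvmftestfini above tears the fixture down. The traced commands amount to roughly the following; note the netns removal itself runs inside _remove_spdk_ns with its output discarded, so that line is an assumption rather than a copy from the log:

    # Teardown sketch assembled from the nvmftestfini trace above.
    sync
    modprobe -v -r nvme-tcp                # rmmod cascade also drops nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                        # the nvmf_tgt pid recorded at startup (4180760 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk        # assumption: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1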
00:12:15.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.370 --rc genhtml_branch_coverage=1 00:12:15.370 --rc genhtml_function_coverage=1 00:12:15.370 --rc genhtml_legend=1 00:12:15.370 --rc geninfo_all_blocks=1 00:12:15.370 --rc geninfo_unexecuted_blocks=1 00:12:15.370 00:12:15.370 ' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.370 --rc genhtml_branch_coverage=1 00:12:15.370 --rc genhtml_function_coverage=1 00:12:15.370 --rc genhtml_legend=1 00:12:15.370 --rc geninfo_all_blocks=1 00:12:15.370 --rc geninfo_unexecuted_blocks=1 00:12:15.370 00:12:15.370 ' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.370 --rc genhtml_branch_coverage=1 00:12:15.370 --rc genhtml_function_coverage=1 00:12:15.370 --rc genhtml_legend=1 00:12:15.370 --rc geninfo_all_blocks=1 00:12:15.370 --rc geninfo_unexecuted_blocks=1 00:12:15.370 00:12:15.370 ' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.370 --rc genhtml_branch_coverage=1 00:12:15.370 --rc genhtml_function_coverage=1 00:12:15.370 --rc genhtml_legend=1 00:12:15.370 --rc geninfo_all_blocks=1 00:12:15.370 --rc geninfo_unexecuted_blocks=1 00:12:15.370 00:12:15.370 ' 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
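The ver1/ver2 juggling above is scripts/common.sh deciding whether the installed lcov predates 2.0, so the older --rc lcov_* option spellings can be exported. The comparison logic, restated as a self-contained sketch (assumes purely numeric dot/dash-separated components, which is all the trace exercises):

    # version_lt A B: succeed when version A sorts strictly before version B.
    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 spellings"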
00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.370 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.630 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.631 03:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.631 03:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:22.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:22.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:22.202 Found net devices under 0000:af:00.0: cvl_0_0 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:22.202 Found net devices under 0000:af:00.1: cvl_0_1 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.202 03:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.202 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:12:22.203 00:12:22.203 --- 10.0.0.2 ping statistics --- 00:12:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.203 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:12:22.203 00:12:22.203 --- 10.0.0.1 ping statistics --- 00:12:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.203 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=4184364 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 4184364 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 4184364 ']' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 [2024-12-10 03:59:20.635310] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
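The bring-up just logged condenses to the following: the target port is moved into its own network namespace so initiator and target traffic crosses a real TCP path on a single host, port 4420 is opened, connectivity is verified both ways, and the target application is started inside the namespace. Paths here are relative to an SPDK checkout, and the polling loop is a simplified stand-in for waitforlisten, not its actual implementation:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                          # wait for the RPC socket to come up
    done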
00:12:22.203 [2024-12-10 03:59:20.635360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.203 [2024-12-10 03:59:20.715092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.203 [2024-12-10 03:59:20.756321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.203 [2024-12-10 03:59:20.756358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.203 [2024-12-10 03:59:20.756365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.203 [2024-12-10 03:59:20.756371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.203 [2024-12-10 03:59:20.756377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.203 [2024-12-10 03:59:20.757848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.203 [2024-12-10 03:59:20.757959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.203 [2024-12-10 03:59:20.758068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.203 [2024-12-10 03:59:20.758069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:22.203 "tick_rate": 2100000000, 00:12:22.203 "poll_groups": [ 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_000", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 "current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [] 00:12:22.203 }, 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_001", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 "current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [] 00:12:22.203 }, 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_002", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 
"current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [] 00:12:22.203 }, 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_003", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 "current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [] 00:12:22.203 } 00:12:22.203 ] 00:12:22.203 }' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.203 03:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 [2024-12-10 03:59:20.996315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.203 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:22.203 "tick_rate": 2100000000, 00:12:22.203 "poll_groups": [ 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_000", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 "current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [ 00:12:22.203 { 00:12:22.203 "trtype": "TCP" 00:12:22.203 } 00:12:22.203 ] 00:12:22.203 }, 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_001", 00:12:22.203 "admin_qpairs": 0, 00:12:22.203 "io_qpairs": 0, 00:12:22.203 "current_admin_qpairs": 0, 00:12:22.203 "current_io_qpairs": 0, 00:12:22.203 "pending_bdev_io": 0, 00:12:22.203 "completed_nvme_io": 0, 00:12:22.203 "transports": [ 00:12:22.203 { 00:12:22.203 "trtype": "TCP" 00:12:22.203 } 00:12:22.203 ] 00:12:22.203 }, 00:12:22.203 { 00:12:22.203 "name": "nvmf_tgt_poll_group_002", 00:12:22.203 "admin_qpairs": 0, 00:12:22.204 "io_qpairs": 0, 00:12:22.204 "current_admin_qpairs": 0, 00:12:22.204 "current_io_qpairs": 0, 00:12:22.204 "pending_bdev_io": 0, 00:12:22.204 "completed_nvme_io": 0, 00:12:22.204 "transports": [ 00:12:22.204 { 00:12:22.204 "trtype": "TCP" 
00:12:22.204 } 00:12:22.204 ] 00:12:22.204 }, 00:12:22.204 { 00:12:22.204 "name": "nvmf_tgt_poll_group_003", 00:12:22.204 "admin_qpairs": 0, 00:12:22.204 "io_qpairs": 0, 00:12:22.204 "current_admin_qpairs": 0, 00:12:22.204 "current_io_qpairs": 0, 00:12:22.204 "pending_bdev_io": 0, 00:12:22.204 "completed_nvme_io": 0, 00:12:22.204 "transports": [ 00:12:22.204 { 00:12:22.204 "trtype": "TCP" 00:12:22.204 } 00:12:22.204 ] 00:12:22.204 } 00:12:22.204 ] 00:12:22.204 }' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 Malloc1 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
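The jcount/jsum assertions above boil down to jq pipelines over nvmf_get_stats output: one poll group per reactor core (-m 0xF gives four), with every qpair counter still zero before any host connects. A sketch of the same checks:

    stats=$(./scripts/rpc.py nvmf_get_stats)
    # jcount: number of poll groups, expected to equal the core count (4)
    echo "$stats" | jq '.poll_groups[].name' | wc -l
    # jsum: aggregate a counter across poll groups, as in the == 0 assertions above
    echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'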
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 [2024-12-10 03:59:21.191774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.204 [2024-12-10 03:59:21.226379] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:22.204 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.204 could not add new controller: failed to write to nvme-fabrics device 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:22.204 03:59:21 
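The NOT wrapper above asserts that this connect fails: the subsystem was created without allow-any-host and the host's NQN has not yet been added, so the target rejects it ("does not allow host") and the write to /dev/nvme-fabrics returns an I/O error. A plain-bash sketch of the same expected-failure check, with hostnqn standing in for the uuid-based NQN used on this rig:

    if nvme connect --hostnqn="$hostnqn" -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420; then
        echo "unexpected: unauthorized host was admitted" >&2
        exit 1
    fi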
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.204 03:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.581 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.582 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.582 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.582 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.582 03:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
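waitforserial and waitforserial_disconnect, seen above, simply poll lsblk for the subsystem's serial number until the namespace block device appears (or, for the disconnect variant, disappears). Roughly:

    for ((i = 0; i <= 15; i++)); do
        sleep 2
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
    done
    # the disconnect variant loops until the grep stops matching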
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:25.489 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.490 [2024-12-10 03:59:24.601975] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:25.490 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:25.490 could not add new controller: failed to write to nvme-fabrics device 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.490 
03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.490 03:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.868 03:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.868 03:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.868 03:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.868 03:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.868 03:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.774 
03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 [2024-12-10 03:59:27.916062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.774 03:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.151 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.151 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.151 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.151 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.151 03:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.055 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 [2024-12-10 03:59:31.257276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.056 03:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.433 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.433 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.433 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.433 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:33.433 03:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.339 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 [2024-12-10 03:59:34.659605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.598 03:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.534 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.534 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:36.534 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.534 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:36.534 03:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.067 
03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 [2024-12-10 03:59:37.918503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.067 03:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.004 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.004 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.004 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.004 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:40.004 03:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.907 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.907 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.907 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.907 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.908 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.908 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:41.908 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 [2024-12-10 03:59:41.371651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.167 03:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.545 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.545 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:43.545 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.545 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:43.545 03:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:45.450 
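The second loop, entered at target/rpc.sh:99 and traced below, repeats subsystem setup and teardown five times (seq 1 5) without ever connecting a host. Reconstructed from the traced RPCs:
    for i in $(seq 1 $loops); do    # loops=5 in this pass
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME          # rpc.sh:100
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # rpc.sh:101
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                          # rpc.sh:102, nsid auto-assigned -> 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1                          # rpc.sh:103
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                             # rpc.sh:105
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                                  # rpc.sh:107
    done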
03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 [2024-12-10 03:59:44.679280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.450 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.451 [2024-12-10 03:59:44.727396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.451 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 
03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 [2024-12-10 03:59:44.775519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 [2024-12-10 03:59:44.823688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.710 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.711 [2024-12-10 03:59:44.871842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:45.711 "tick_rate": 2100000000,
00:12:45.711 "poll_groups": [
00:12:45.711 {
00:12:45.711 "name": "nvmf_tgt_poll_group_000",
00:12:45.711 "admin_qpairs": 2,
00:12:45.711 "io_qpairs": 168,
00:12:45.711 "current_admin_qpairs": 0,
00:12:45.711 "current_io_qpairs": 0,
00:12:45.711 "pending_bdev_io": 0,
00:12:45.711 "completed_nvme_io": 317,
00:12:45.711 "transports": [
00:12:45.711 {
00:12:45.711 "trtype": "TCP"
00:12:45.711 }
00:12:45.711 ]
00:12:45.711 },
00:12:45.711 {
00:12:45.711 "name": "nvmf_tgt_poll_group_001",
00:12:45.711 "admin_qpairs": 2,
00:12:45.711 "io_qpairs": 168,
00:12:45.711 "current_admin_qpairs": 0,
00:12:45.711 "current_io_qpairs": 0,
00:12:45.711 "pending_bdev_io": 0,
00:12:45.711 "completed_nvme_io": 219,
00:12:45.711 "transports": [
00:12:45.711 {
00:12:45.711 "trtype": "TCP"
00:12:45.711 }
00:12:45.711 ]
00:12:45.711 },
00:12:45.711 {
00:12:45.711 "name": "nvmf_tgt_poll_group_002",
00:12:45.711 "admin_qpairs": 1,
00:12:45.711 "io_qpairs": 168,
00:12:45.711 "current_admin_qpairs": 0,
00:12:45.711 "current_io_qpairs": 0,
00:12:45.711 "pending_bdev_io": 0,
00:12:45.711 "completed_nvme_io": 218,
00:12:45.711 "transports": [
00:12:45.711 {
00:12:45.711 "trtype": "TCP"
00:12:45.711 }
00:12:45.711 ]
00:12:45.711 },
00:12:45.711 {
00:12:45.711 "name": "nvmf_tgt_poll_group_003",
00:12:45.711 "admin_qpairs": 2,
00:12:45.711 "io_qpairs": 168,
00:12:45.711 "current_admin_qpairs": 0,
00:12:45.711 "current_io_qpairs": 0,
00:12:45.711 "pending_bdev_io": 0,
00:12:45.711 "completed_nvme_io": 268,
00:12:45.711 "transports": [
00:12:45.711 {
00:12:45.711 "trtype": "TCP"
00:12:45.711 }
00:12:45.711 ]
00:12:45.711 }
00:12:45.711 ]
00:12:45.711 }'
03:59:44
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:45.711 03:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.975 rmmod nvme_tcp 00:12:45.975 rmmod nvme_fabrics 00:12:45.975 rmmod nvme_keyring 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 4184364 ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 4184364 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 4184364 ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 4184364 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184364 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4184364' 00:12:45.975 killing process with pid 4184364 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 4184364 00:12:45.975 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 4184364 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.235 03:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.140 00:12:48.140 real 0m32.920s 00:12:48.140 user 1m39.481s 00:12:48.140 sys 0m6.461s 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.140 ************************************ 00:12:48.140 END TEST nvmf_rpc 00:12:48.140 ************************************ 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.140 03:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.399 ************************************ 00:12:48.399 START TEST nvmf_invalid 00:12:48.399 ************************************ 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:48.399 * Looking for test storage... 
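Two helpers worth unpacking from the traces above: jsum (target/rpc.sh:19-20) reduces one field across all poll groups in the captured stats, and iptr (nvmf/common.sh:791) strips the SPDK-tagged firewall rules during cleanup. Sketches reconstructed from the traced commands; feeding $stats to jq via a here-string is an assumption, since only the jq and awk stages appear in the log:
    jsum() {
        local filter=$1
        # sum the numbers the jq filter extracts; here 2+2+1+2 = 7 admin qpairs
        # and 4 x 168 = 672 io qpairs, matching the (( 7 > 0 )) / (( 672 > 0 )) checks
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    iptr() {
        # reload the rule set minus anything carrying the SPDK_NVMF comment tag
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }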
00:12:48.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:48.399 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.400 --rc genhtml_branch_coverage=1 00:12:48.400 --rc genhtml_function_coverage=1 00:12:48.400 --rc genhtml_legend=1 00:12:48.400 --rc geninfo_all_blocks=1 00:12:48.400 --rc geninfo_unexecuted_blocks=1 00:12:48.400 00:12:48.400 ' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.400 --rc genhtml_branch_coverage=1 00:12:48.400 --rc genhtml_function_coverage=1 00:12:48.400 --rc genhtml_legend=1 00:12:48.400 --rc geninfo_all_blocks=1 00:12:48.400 --rc geninfo_unexecuted_blocks=1 00:12:48.400 00:12:48.400 ' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.400 --rc genhtml_branch_coverage=1 00:12:48.400 --rc genhtml_function_coverage=1 00:12:48.400 --rc genhtml_legend=1 00:12:48.400 --rc geninfo_all_blocks=1 00:12:48.400 --rc geninfo_unexecuted_blocks=1 00:12:48.400 00:12:48.400 ' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.400 --rc genhtml_branch_coverage=1 00:12:48.400 --rc genhtml_function_coverage=1 00:12:48.400 --rc genhtml_legend=1 00:12:48.400 --rc geninfo_all_blocks=1 00:12:48.400 --rc geninfo_unexecuted_blocks=1 00:12:48.400 00:12:48.400 ' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:48.400 03:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.400 03:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.976 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:54.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:54.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:54.977 Found net devices under 0000:af:00.0: cvl_0_0 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:54.977 Found net devices under 0000:af:00.1: cvl_0_1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:12:54.977 00:12:54.977 --- 10.0.0.2 ping statistics --- 00:12:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.977 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:12:54.977 00:12:54.977 --- 10.0.0.1 ping statistics --- 00:12:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.977 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.977 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=4191969 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 4191969 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 4191969 ']' 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.978 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.978 [2024-12-10 03:59:53.646421] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
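For anyone replaying this bring-up outside the CI rig: the nvmf_tcp_init steps traced above boil down to isolating the target-side interface in its own network namespace, addressing both ends of the link, punching a firewall hole for port 4420, and verifying reachability in both directions before nvmf_tgt starts. A minimal standalone sketch, assuming a veth pair (veth_ini/veth_tgt) and a namespace name (spdk_tgt_ns) in place of the physical cvl_0_0/cvl_0_1 E810 ports and cvl_0_0_ns_spdk; run as root:

ip netns add spdk_tgt_ns                            # namespace that will host nvmf_tgt
ip link add veth_ini type veth peer name veth_tgt   # assumed veth pair standing in for the NIC ports
ip link set veth_tgt netns spdk_tgt_ns              # target side moves into the namespace
ip addr add 10.0.0.1/24 dev veth_ini                # initiator IP, as NVMF_INITIATOR_IP above
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target IP, as NVMF_FIRST_TARGET_IP
ip link set veth_ini up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF   # tagged like the rule at common.sh@790
ping -c 1 10.0.0.2                                  # host -> namespace, mirrors common.sh@290
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1        # namespace -> host, mirrors common.sh@291

With a veth pair both pings succeed exactly as in the trace; the pci_devs discovery loop above is only needed when binding real e810 hardware, and the SPDK_NVMF comment tag on the iptables rule is what lets the teardown strip only the rules this harness added.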
00:12:54.978 [2024-12-10 03:59:53.646463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.978 [2024-12-10 03:59:53.722240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.978 [2024-12-10 03:59:53.763498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.978 [2024-12-10 03:59:53.763535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.978 [2024-12-10 03:59:53.763542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.978 [2024-12-10 03:59:53.763547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.978 [2024-12-10 03:59:53.763552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.978 [2024-12-10 03:59:53.767187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.978 [2024-12-10 03:59:53.767221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.978 [2024-12-10 03:59:53.767330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.978 [2024-12-10 03:59:53.767331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.236 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.237 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:55.237 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.237 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.237 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.237 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7454 00:12:55.504 [2024-12-10 03:59:54.687112] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:55.504 { 00:12:55.504 "nqn": "nqn.2016-06.io.spdk:cnode7454", 00:12:55.504 "tgt_name": "foobar", 00:12:55.504 "method": "nvmf_create_subsystem", 00:12:55.504 "req_id": 1 00:12:55.504 } 00:12:55.504 Got JSON-RPC error response 00:12:55.504 response: 00:12:55.504 { 00:12:55.504 "code": -32603, 00:12:55.504 "message": "Unable to find target foobar" 00:12:55.504 }' 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:55.504 { 00:12:55.504 "nqn": "nqn.2016-06.io.spdk:cnode7454", 00:12:55.504 "tgt_name": "foobar", 00:12:55.504 "method": "nvmf_create_subsystem", 00:12:55.504 "req_id": 1 00:12:55.504 } 00:12:55.504 Got JSON-RPC error response 00:12:55.504 
response: 00:12:55.504 { 00:12:55.504 "code": -32603, 00:12:55.504 "message": "Unable to find target foobar" 00:12:55.504 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:55.504 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9337 00:12:55.762 [2024-12-10 03:59:54.891819] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9337: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:55.762 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:55.762 { 00:12:55.762 "nqn": "nqn.2016-06.io.spdk:cnode9337", 00:12:55.762 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:55.762 "method": "nvmf_create_subsystem", 00:12:55.762 "req_id": 1 00:12:55.762 } 00:12:55.763 Got JSON-RPC error response 00:12:55.763 response: 00:12:55.763 { 00:12:55.763 "code": -32602, 00:12:55.763 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:55.763 }' 00:12:55.763 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:55.763 { 00:12:55.763 "nqn": "nqn.2016-06.io.spdk:cnode9337", 00:12:55.763 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:55.763 "method": "nvmf_create_subsystem", 00:12:55.763 "req_id": 1 00:12:55.763 } 00:12:55.763 Got JSON-RPC error response 00:12:55.763 response: 00:12:55.763 { 00:12:55.763 "code": -32602, 00:12:55.763 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:55.763 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.763 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:55.763 03:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19962 00:12:56.022 [2024-12-10 03:59:55.084380] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19962: invalid model number 'SPDK_Controller' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:56.022 { 00:12:56.022 "nqn": "nqn.2016-06.io.spdk:cnode19962", 00:12:56.022 "model_number": "SPDK_Controller\u001f", 00:12:56.022 "method": "nvmf_create_subsystem", 00:12:56.022 "req_id": 1 00:12:56.022 } 00:12:56.022 Got JSON-RPC error response 00:12:56.022 response: 00:12:56.022 { 00:12:56.022 "code": -32602, 00:12:56.022 "message": "Invalid MN SPDK_Controller\u001f" 00:12:56.022 }' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:56.022 { 00:12:56.022 "nqn": "nqn.2016-06.io.spdk:cnode19962", 00:12:56.022 "model_number": "SPDK_Controller\u001f", 00:12:56.022 "method": "nvmf_create_subsystem", 00:12:56.022 "req_id": 1 00:12:56.022 } 00:12:56.022 Got JSON-RPC error response 00:12:56.022 response: 00:12:56.022 { 00:12:56.022 "code": -32602, 00:12:56.022 "message": "Invalid MN SPDK_Controller\u001f" 00:12:56.022 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:56.022 03:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.022 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:56.023 
03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 
00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c@$X>s^Od&W8pT~6j$BHQ' 00:12:56.023 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'c@$X>s^Od&W8pT~6j$BHQ' nqn.2016-06.io.spdk:cnode13311 00:12:56.282 [2024-12-10 03:59:55.433510] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13311: invalid serial number 'c@$X>s^Od&W8pT~6j$BHQ' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:56.282 { 00:12:56.282 "nqn": "nqn.2016-06.io.spdk:cnode13311", 00:12:56.282 "serial_number": "c@$X>s^Od&W8pT~6j$BHQ", 00:12:56.282 "method": "nvmf_create_subsystem", 00:12:56.282 "req_id": 1 00:12:56.282 } 00:12:56.282 Got JSON-RPC error response 00:12:56.282 response: 00:12:56.282 { 00:12:56.282 "code": -32602, 00:12:56.282 "message": "Invalid SN c@$X>s^Od&W8pT~6j$BHQ" 00:12:56.282 }' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:56.282 { 00:12:56.282 "nqn": "nqn.2016-06.io.spdk:cnode13311", 00:12:56.282 "serial_number": "c@$X>s^Od&W8pT~6j$BHQ", 00:12:56.282 "method": "nvmf_create_subsystem", 00:12:56.282 "req_id": 1 00:12:56.282 } 00:12:56.282 Got JSON-RPC error response 00:12:56.282 response: 00:12:56.282 { 00:12:56.282 "code": -32602, 00:12:56.282 "message": "Invalid SN c@$X>s^Od&W8pT~6j$BHQ" 00:12:56.282 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:56.282 
03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.282 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:56.283 
03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:56.283 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:56.542 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='!' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x74' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T92Yj<$WteP90h>Xo%#M=j%md'\''3i"M`j!DV0n;_t|' 00:12:56.543 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'T92Yj<$WteP90h>Xo%#M=j%md'\''3i"M`j!DV0n;_t|' nqn.2016-06.io.spdk:cnode17274 00:12:56.802 [2024-12-10 03:59:55.899010] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17274: invalid model number 'T92Yj<$WteP90h>Xo%#M=j%md'3i"M`j!DV0n;_t|' 00:12:56.802 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:56.802 { 00:12:56.802 "nqn": "nqn.2016-06.io.spdk:cnode17274", 00:12:56.802 "model_number": "T92Yj<$WteP90h>Xo%#M=j%md'\''3i\"M`j!DV0n;_t|", 00:12:56.802 "method": "nvmf_create_subsystem", 00:12:56.802 "req_id": 1 00:12:56.802 } 00:12:56.802 Got JSON-RPC error response 00:12:56.802 response: 00:12:56.802 { 00:12:56.802 "code": -32602, 00:12:56.802 "message": "Invalid MN T92Yj<$WteP90h>Xo%#M=j%md'\''3i\"M`j!DV0n;_t|" 00:12:56.802 }' 00:12:56.802 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:56.802 { 00:12:56.802 "nqn": "nqn.2016-06.io.spdk:cnode17274", 00:12:56.802 "model_number": "T92Yj<$WteP90h>Xo%#M=j%md'3i\"M`j!DV0n;_t|", 00:12:56.802 "method": "nvmf_create_subsystem", 00:12:56.802 "req_id": 1 00:12:56.802 } 00:12:56.802 Got JSON-RPC error response 00:12:56.802 response: 00:12:56.802 { 00:12:56.802 "code": -32602, 00:12:56.802 "message": "Invalid MN T92Yj<$WteP90h>Xo%#M=j%md'3i\"M`j!DV0n;_t|" 00:12:56.802 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.802 03:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:57.061 [2024-12-10 03:59:56.095737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.061 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:57.061 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:57.061 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:57.061 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:57.061 03:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:57.061 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:57.319 [2024-12-10 03:59:56.501079] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:57.319 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:57.319 { 00:12:57.319 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.319 "listen_address": { 00:12:57.319 "trtype": "tcp", 00:12:57.319 "traddr": "", 00:12:57.319 "trsvcid": "4421" 00:12:57.319 }, 00:12:57.319 "method": "nvmf_subsystem_remove_listener", 00:12:57.319 "req_id": 1 00:12:57.319 } 00:12:57.319 Got JSON-RPC error response 00:12:57.319 response: 00:12:57.319 { 00:12:57.319 "code": -32602, 00:12:57.319 "message": "Invalid parameters" 00:12:57.319 }' 00:12:57.319 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:57.319 { 00:12:57.319 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.319 "listen_address": { 00:12:57.319 "trtype": "tcp", 00:12:57.319 "traddr": "", 00:12:57.319 "trsvcid": "4421" 00:12:57.319 }, 00:12:57.319 "method": "nvmf_subsystem_remove_listener", 00:12:57.319 "req_id": 1 00:12:57.319 } 00:12:57.319 Got JSON-RPC error response 00:12:57.319 response: 00:12:57.319 { 00:12:57.319 "code": -32602, 00:12:57.319 "message": "Invalid parameters" 00:12:57.319 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:57.319 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16288 -i 0 00:12:57.578 [2024-12-10 03:59:56.701704] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16288: invalid cntlid range [0-65519] 00:12:57.578 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:57.578 { 00:12:57.578 "nqn": "nqn.2016-06.io.spdk:cnode16288", 00:12:57.578 "min_cntlid": 0, 00:12:57.578 "method": "nvmf_create_subsystem", 00:12:57.578 "req_id": 1 00:12:57.578 } 00:12:57.578 Got JSON-RPC error response 00:12:57.578 response: 00:12:57.578 { 00:12:57.578 "code": -32602, 00:12:57.578 "message": "Invalid cntlid range [0-65519]" 00:12:57.578 }' 00:12:57.578 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:57.578 { 00:12:57.578 "nqn": "nqn.2016-06.io.spdk:cnode16288", 00:12:57.578 "min_cntlid": 0, 00:12:57.578 "method": "nvmf_create_subsystem", 00:12:57.578 "req_id": 1 00:12:57.578 } 00:12:57.578 Got JSON-RPC error response 00:12:57.578 response: 00:12:57.578 { 00:12:57.578 "code": -32602, 00:12:57.578 "message": "Invalid cntlid range [0-65519]" 00:12:57.578 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.578 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26210 -i 65520 00:12:57.836 [2024-12-10 03:59:56.906400] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26210: invalid cntlid range [65520-65519] 00:12:57.837 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:57.837 { 00:12:57.837 "nqn": 
"nqn.2016-06.io.spdk:cnode26210", 00:12:57.837 "min_cntlid": 65520, 00:12:57.837 "method": "nvmf_create_subsystem", 00:12:57.837 "req_id": 1 00:12:57.837 } 00:12:57.837 Got JSON-RPC error response 00:12:57.837 response: 00:12:57.837 { 00:12:57.837 "code": -32602, 00:12:57.837 "message": "Invalid cntlid range [65520-65519]" 00:12:57.837 }' 00:12:57.837 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:57.837 { 00:12:57.837 "nqn": "nqn.2016-06.io.spdk:cnode26210", 00:12:57.837 "min_cntlid": 65520, 00:12:57.837 "method": "nvmf_create_subsystem", 00:12:57.837 "req_id": 1 00:12:57.837 } 00:12:57.837 Got JSON-RPC error response 00:12:57.837 response: 00:12:57.837 { 00:12:57.837 "code": -32602, 00:12:57.837 "message": "Invalid cntlid range [65520-65519]" 00:12:57.837 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.837 03:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26476 -I 0 00:12:57.837 [2024-12-10 03:59:57.119161] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26476: invalid cntlid range [1-0] 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:58.095 { 00:12:58.095 "nqn": "nqn.2016-06.io.spdk:cnode26476", 00:12:58.095 "max_cntlid": 0, 00:12:58.095 "method": "nvmf_create_subsystem", 00:12:58.095 "req_id": 1 00:12:58.095 } 00:12:58.095 Got JSON-RPC error response 00:12:58.095 response: 00:12:58.095 { 00:12:58.095 "code": -32602, 00:12:58.095 "message": "Invalid cntlid range [1-0]" 00:12:58.095 }' 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:58.095 { 00:12:58.095 "nqn": "nqn.2016-06.io.spdk:cnode26476", 00:12:58.095 "max_cntlid": 0, 00:12:58.095 "method": "nvmf_create_subsystem", 00:12:58.095 "req_id": 1 00:12:58.095 } 00:12:58.095 Got JSON-RPC error response 00:12:58.095 response: 00:12:58.095 { 00:12:58.095 "code": -32602, 00:12:58.095 "message": "Invalid cntlid range [1-0]" 00:12:58.095 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24499 -I 65520 00:12:58.095 [2024-12-10 03:59:57.315834] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24499: invalid cntlid range [1-65520] 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:58.095 { 00:12:58.095 "nqn": "nqn.2016-06.io.spdk:cnode24499", 00:12:58.095 "max_cntlid": 65520, 00:12:58.095 "method": "nvmf_create_subsystem", 00:12:58.095 "req_id": 1 00:12:58.095 } 00:12:58.095 Got JSON-RPC error response 00:12:58.095 response: 00:12:58.095 { 00:12:58.095 "code": -32602, 00:12:58.095 "message": "Invalid cntlid range [1-65520]" 00:12:58.095 }' 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:58.095 { 00:12:58.095 "nqn": "nqn.2016-06.io.spdk:cnode24499", 00:12:58.095 "max_cntlid": 65520, 00:12:58.095 "method": "nvmf_create_subsystem", 00:12:58.095 "req_id": 1 00:12:58.095 } 00:12:58.095 Got JSON-RPC error response 00:12:58.095 response: 00:12:58.095 { 00:12:58.095 "code": -32602, 00:12:58.095 "message": "Invalid cntlid range [1-65520]" 
00:12:58.095 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.095 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4847 -i 6 -I 5 00:12:58.354 [2024-12-10 03:59:57.516509] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4847: invalid cntlid range [6-5] 00:12:58.354 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:58.354 { 00:12:58.354 "nqn": "nqn.2016-06.io.spdk:cnode4847", 00:12:58.354 "min_cntlid": 6, 00:12:58.354 "max_cntlid": 5, 00:12:58.354 "method": "nvmf_create_subsystem", 00:12:58.354 "req_id": 1 00:12:58.354 } 00:12:58.354 Got JSON-RPC error response 00:12:58.354 response: 00:12:58.354 { 00:12:58.354 "code": -32602, 00:12:58.354 "message": "Invalid cntlid range [6-5]" 00:12:58.354 }' 00:12:58.354 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:58.354 { 00:12:58.354 "nqn": "nqn.2016-06.io.spdk:cnode4847", 00:12:58.354 "min_cntlid": 6, 00:12:58.354 "max_cntlid": 5, 00:12:58.354 "method": "nvmf_create_subsystem", 00:12:58.354 "req_id": 1 00:12:58.354 } 00:12:58.354 Got JSON-RPC error response 00:12:58.354 response: 00:12:58.354 { 00:12:58.354 "code": -32602, 00:12:58.354 "message": "Invalid cntlid range [6-5]" 00:12:58.354 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.354 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:58.612 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:58.612 { 00:12:58.612 "name": "foobar", 00:12:58.612 "method": "nvmf_delete_target", 00:12:58.612 "req_id": 1 00:12:58.612 } 00:12:58.612 Got JSON-RPC error response 00:12:58.612 response: 00:12:58.612 { 00:12:58.612 "code": -32602, 00:12:58.612 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:58.612 }' 00:12:58.612 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:58.612 { 00:12:58.612 "name": "foobar", 00:12:58.612 "method": "nvmf_delete_target", 00:12:58.612 "req_id": 1 00:12:58.612 } 00:12:58.613 Got JSON-RPC error response 00:12:58.613 response: 00:12:58.613 { 00:12:58.613 "code": -32602, 00:12:58.613 "message": "The specified target doesn't exist, cannot delete it." 
00:12:58.613 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.613 rmmod nvme_tcp 00:12:58.613 rmmod nvme_fabrics 00:12:58.613 rmmod nvme_keyring 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 4191969 ']' 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 4191969 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 4191969 ']' 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 4191969 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191969 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4191969' 00:12:58.613 killing process with pid 4191969 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 4191969 00:12:58.613 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 4191969 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.871 03:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.810 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.810 00:13:00.810 real 0m12.544s 00:13:00.810 user 0m20.974s 00:13:00.810 sys 0m5.372s 00:13:00.810 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.810 03:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.810 ************************************ 00:13:00.810 END TEST nvmf_invalid 00:13:00.810 ************************************ 00:13:00.810 04:00:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:00.810 04:00:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.810 04:00:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.810 04:00:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.810 ************************************ 00:13:00.810 START TEST nvmf_connect_stress 00:13:00.810 ************************************ 00:13:00.810 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:01.070 * Looking for test storage... 
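The nvmf_invalid run that closes above ends by confirming that nvmf_create_subsystem rejects every malformed controller-ID range with JSON-RPC error -32602: the valid cntlid window is [1-65519], so a min_cntlid of 65520, a max_cntlid of 0 or 65520, and an inverted range such as [6-5] all have to fail. Those checks can be replayed by hand against a running nvmf_tgt; the following is a minimal sketch under stated assumptions (an SPDK checkout providing scripts/rpc.py, a target listening on the default /var/tmp/spdk.sock, and a throwaway NQN of my choosing), not part of the test suite itself:

#!/usr/bin/env bash
# Sketch only: replay the cntlid-range negative tests from the log above.
# Assumes scripts/rpc.py from an SPDK checkout; it talks JSON-RPC over
# /var/tmp/spdk.sock unless -s points elsewhere.
RPC=./scripts/rpc.py                 # hypothetical path, adjust to your tree
NQN=nqn.2016-06.io.spdk:cnode999     # throwaway subsystem NQN

# -i sets min_cntlid and -I sets max_cntlid, as traced in the log.
# Each combination below falls outside [1-65519] or is inverted, so every
# call is expected to fail with code -32602 / "Invalid cntlid range".
for args in "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
    if out=$($RPC nvmf_create_subsystem "$NQN" $args 2>&1); then
        echo "FAIL: '$args' was accepted" >&2
        exit 1
    fi
    [[ $out == *"Invalid cntlid range"* ]] || { echo "FAIL: $out" >&2; exit 1; }
done
echo "PASS: all invalid cntlid ranges rejected"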
00:13:01.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:01.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.070 --rc genhtml_branch_coverage=1 00:13:01.070 --rc genhtml_function_coverage=1 00:13:01.070 --rc genhtml_legend=1 00:13:01.070 --rc geninfo_all_blocks=1 00:13:01.070 --rc geninfo_unexecuted_blocks=1 00:13:01.070 00:13:01.070 ' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:01.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.070 --rc genhtml_branch_coverage=1 00:13:01.070 --rc genhtml_function_coverage=1 00:13:01.070 --rc genhtml_legend=1 00:13:01.070 --rc geninfo_all_blocks=1 00:13:01.070 --rc geninfo_unexecuted_blocks=1 00:13:01.070 00:13:01.070 ' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:01.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.070 --rc genhtml_branch_coverage=1 00:13:01.070 --rc genhtml_function_coverage=1 00:13:01.070 --rc genhtml_legend=1 00:13:01.070 --rc geninfo_all_blocks=1 00:13:01.070 --rc geninfo_unexecuted_blocks=1 00:13:01.070 00:13:01.070 ' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:01.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.070 --rc genhtml_branch_coverage=1 00:13:01.070 --rc genhtml_function_coverage=1 00:13:01.070 --rc genhtml_legend=1 00:13:01.070 --rc geninfo_all_blocks=1 00:13:01.070 --rc geninfo_unexecuted_blocks=1 00:13:01.070 00:13:01.070 ' 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.070 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:01.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:01.071 04:00:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.733 04:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:07.733 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:07.733 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:07.733 Found net devices under 0000:af:00.0: cvl_0_0 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.733 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:07.734 Found net devices under 0000:af:00.1: cvl_0_1 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.734 04:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:13:07.734 00:13:07.734 --- 10.0.0.2 ping statistics --- 00:13:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.734 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:13:07.734 00:13:07.734 --- 10.0.0.1 ping statistics --- 00:13:07.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.734 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2915 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2915 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2915 ']' 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:07.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 [2024-12-10 04:00:06.251474] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:07.734 [2024-12-10 04:00:06.251523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.734 [2024-12-10 04:00:06.333870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:07.734 [2024-12-10 04:00:06.373296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.734 [2024-12-10 04:00:06.373347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.734 [2024-12-10 04:00:06.373358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.734 [2024-12-10 04:00:06.373364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.734 [2024-12-10 04:00:06.373370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.734 [2024-12-10 04:00:06.374722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.734 [2024-12-10 04:00:06.374828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.734 [2024-12-10 04:00:06.374829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 [2024-12-10 04:00:06.511388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
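For reference, the target-side bring-up that connect_stress.sh drives at this point in the log reduces to a handful of RPCs against the freshly started nvmf_tgt. A minimal sketch using the same names and the 10.0.0.2 test address traced in this run; the final namespace attach is an assumption inferred from the script's flow rather than visible in this excerpt:

#!/usr/bin/env bash
# Sketch of the bring-up traced in this run; the RPC path is hypothetical.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                        # transport options exactly as traced
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10    # -a allow any host, -m max namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # TCP listener on the test address
$RPC bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns "$NQN" NULL1                             # assumed step: attach NULL1 as a namespace

connect_stress then opens short-lived connections to that listener for ten seconds (-t 10) while the harness repeatedly polls kill -0 on its PID to confirm the target has not crashed, which is exactly the pattern of rpc_cmd / kill -0 3060 lines that fills the rest of this section.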
00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 [2024-12-10 04:00:06.531622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 NULL1 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3060 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:07.734 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.735 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.735 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.302 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.302 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:08.302 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.302 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.302 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.561 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.561 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:08.561 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.561 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.561 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.821 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.821 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:08.821 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.821 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.821 04:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.079 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.079 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:09.079 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.079 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.079 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.338 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.338 04:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:09.338 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.338 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.338 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.905 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.905 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:09.905 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.906 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.906 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.164 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.164 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:10.164 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.164 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.164 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.423 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.423 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:10.423 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.423 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.423 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.681 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.681 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:10.681 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.681 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.681 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.939 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.939 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:10.939 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.939 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.939 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.506 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.506 04:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:11.506 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.506 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.506 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.765 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.765 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:11.765 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.765 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.765 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.024 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.024 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:12.024 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.024 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.024 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.283 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.283 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:12.283 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.283 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.283 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.850 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.850 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:12.850 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.850 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.850 04:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.109 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.109 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:13.109 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.109 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.109 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.367 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.367 04:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:13.368 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.368 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.368 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.627 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.627 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:13.627 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.627 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.627 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.885 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.885 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:13.885 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.885 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.885 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.453 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.453 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:14.453 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.453 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.453 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.712 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.712 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:14.712 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.712 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.712 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.970 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.970 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:14.970 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.970 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.970 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.229 04:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:15.229 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.229 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.229 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.796 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.796 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:15.796 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.796 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.796 04:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.054 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.054 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:16.054 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.054 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.054 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.313 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:16.313 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.313 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.313 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.571 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.571 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:16.571 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.571 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.571 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.830 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.830 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:16.830 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.830 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.831 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.398 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.398 04:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:17.398 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.398 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.398 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.657 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3060 00:13:17.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3060) - No such process 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3060 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.657 rmmod nvme_tcp 00:13:17.657 rmmod nvme_fabrics 00:13:17.657 rmmod nvme_keyring 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2915 ']' 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2915 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2915 ']' 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2915 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2915 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.657 04:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2915' 00:13:17.657 killing process with pid 2915 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2915 00:13:17.657 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2915 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.916 04:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.916 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.916 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.916 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.916 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.916 04:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.822 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.822 00:13:19.822 real 0m19.003s 00:13:19.822 user 0m39.417s 00:13:19.822 sys 0m8.539s 00:13:19.822 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.822 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.822 ************************************ 00:13:19.822 END TEST nvmf_connect_stress 00:13:19.822 ************************************ 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.082 ************************************ 00:13:20.082 START TEST nvmf_fused_ordering 00:13:20.082 ************************************ 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:20.082 * Looking for test storage... 
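The connect_stress records above repeat one pattern until PID 3060 exits: probe the stress process with kill -0 (connect_stress.sh line 34) and, while it is alive, keep the target busy with rpc_cmd (line 35). A minimal sketch of that loop and the teardown that follows, assuming the SPDK test-harness helpers rpc_cmd and nvmftestfini; the exact RPC payload behind rpc_cmd is not visible in this excerpt, so the rpc.txt redirect below is an assumption inferred from the "rm -f .../rpc.txt" cleanup step:

```bash
# Sketch of target/connect_stress.sh's wait loop as recorded above.
# $stress_pid is the PID of the stress tool (3060 in this run).
while kill -0 "$stress_pid"; do    # line 34: succeeds while the process exists;
                                   # the 'kill: (3060) - No such process' record
                                   # above is this loop's exit condition
    rpc_cmd < rpc.txt              # line 35: assumed batched RPCs keep the target busy
done
wait "$stress_pid"                 # line 38: reap the stress tool
rm -f rpc.txt                      # line 39: drop the batched-RPC file
trap - SIGINT SIGTERM EXIT         # line 41: clear the error traps
nvmftestfini                       # line 43: rmmod nvme-tcp/nvme-fabrics/nvme-keyring
                                   # and kill the nvmf_tgt app (PID 2915 in this run)
```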
00:13:20.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.082 --rc genhtml_branch_coverage=1 00:13:20.082 --rc genhtml_function_coverage=1 00:13:20.082 --rc genhtml_legend=1 00:13:20.082 --rc geninfo_all_blocks=1 00:13:20.082 --rc geninfo_unexecuted_blocks=1 00:13:20.082 00:13:20.082 ' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.082 --rc genhtml_branch_coverage=1 00:13:20.082 --rc genhtml_function_coverage=1 00:13:20.082 --rc genhtml_legend=1 00:13:20.082 --rc geninfo_all_blocks=1 00:13:20.082 --rc geninfo_unexecuted_blocks=1 00:13:20.082 00:13:20.082 ' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.082 --rc genhtml_branch_coverage=1 00:13:20.082 --rc genhtml_function_coverage=1 00:13:20.082 --rc genhtml_legend=1 00:13:20.082 --rc geninfo_all_blocks=1 00:13:20.082 --rc geninfo_unexecuted_blocks=1 00:13:20.082 00:13:20.082 ' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.082 --rc genhtml_branch_coverage=1 00:13:20.082 --rc genhtml_function_coverage=1 00:13:20.082 --rc genhtml_legend=1 00:13:20.082 --rc geninfo_all_blocks=1 00:13:20.082 --rc geninfo_unexecuted_blocks=1 00:13:20.082 00:13:20.082 ' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.082 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:20.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.083 04:00:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.651 04:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:26.651 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:26.651 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:26.651 Found net devices under 0000:af:00.0: cvl_0_0 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:26.651 Found net devices under 0000:af:00.1: cvl_0_1 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.651 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.652 04:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:13:26.652 00:13:26.652 --- 10.0.0.2 ping statistics --- 00:13:26.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.652 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:13:26.652 00:13:26.652 --- 10.0.0.1 ping statistics --- 00:13:26.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.652 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=8614 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 8614 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 8614 ']' 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 [2024-12-10 04:00:25.313818] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:26.652 [2024-12-10 04:00:25.313868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.652 [2024-12-10 04:00:25.393417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.652 [2024-12-10 04:00:25.432563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.652 [2024-12-10 04:00:25.432598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.652 [2024-12-10 04:00:25.432605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.652 [2024-12-10 04:00:25.432611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.652 [2024-12-10 04:00:25.432616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.652 [2024-12-10 04:00:25.433091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 [2024-12-10 04:00:25.568307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 [2024-12-10 04:00:25.588498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 NULL1 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.652 04:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.652 [2024-12-10 04:00:25.648020] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
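Before the fused_ordering tool attaches, the records above configure the freshly started target over JSON-RPC: create the TCP transport, add subsystem nqn.2016-06.io.spdk:cnode1, attach a 10.0.0.2:4420 listener (the address ping-verified earlier inside the cvl_0_0_ns_spdk namespace), and back the subsystem with a null bdev. A minimal sketch of the same bring-up issued directly with scripts/rpc.py instead of the harness's rpc_cmd wrapper; every verb and argument is copied from the log, and only the $RPC shorthand is introduced here:

```bash
# Bring-up sequence recorded above, replayed via scripts/rpc.py.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # transport options as logged
                                                    # (harness NVMF_TRANSPORT_OPTS)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                  # allow any host, serial number,
                                                    # at most 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                      # TCP listener on port 4420
$RPC bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # appears below as
                                                              # 'Namespace ID: 1 size: 1GB'
```

The fused_ordering(N) records that follow are the tool's own per-iteration output once it has attached to cnode1.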
00:13:26.652 [2024-12-10 04:00:25.648066] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8634 ] 00:13:26.911 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.911 Namespace ID: 1 size: 1GB 00:13:26.911 fused_ordering(0) 00:13:26.911 fused_ordering(1) 00:13:26.912 fused_ordering(2) 00:13:26.912 fused_ordering(3) 00:13:26.912 fused_ordering(4) 00:13:26.912 fused_ordering(5) 00:13:26.912 fused_ordering(6) 00:13:26.912 fused_ordering(7) 00:13:26.912 fused_ordering(8) 00:13:26.912 fused_ordering(9) 00:13:26.912 fused_ordering(10) 00:13:26.912 fused_ordering(11) 00:13:26.912 fused_ordering(12) 00:13:26.912 fused_ordering(13) 00:13:26.912 fused_ordering(14) 00:13:26.912 fused_ordering(15) 00:13:26.912 fused_ordering(16) 00:13:26.912 fused_ordering(17) 00:13:26.912 fused_ordering(18) 00:13:26.912 fused_ordering(19) 00:13:26.912 fused_ordering(20) 00:13:26.912 fused_ordering(21) 00:13:26.912 fused_ordering(22) 00:13:26.912 fused_ordering(23) 00:13:26.912 fused_ordering(24) 00:13:26.912 fused_ordering(25) 00:13:26.912 fused_ordering(26) 00:13:26.912 fused_ordering(27) 00:13:26.912 fused_ordering(28) 00:13:26.912 fused_ordering(29) 00:13:26.912 fused_ordering(30) 00:13:26.912 fused_ordering(31) 00:13:26.912 fused_ordering(32) 00:13:26.912 fused_ordering(33) 00:13:26.912 fused_ordering(34) 00:13:26.912 fused_ordering(35) 00:13:26.912 fused_ordering(36) 00:13:26.912 fused_ordering(37) 00:13:26.912 fused_ordering(38) 00:13:26.912 fused_ordering(39) 00:13:26.912 fused_ordering(40) 00:13:26.912 fused_ordering(41) 00:13:26.912 fused_ordering(42) 00:13:26.912 fused_ordering(43) 00:13:26.912 fused_ordering(44) 00:13:26.912 fused_ordering(45) 00:13:26.912 fused_ordering(46) 00:13:26.912 fused_ordering(47) 00:13:26.912 fused_ordering(48) 00:13:26.912 fused_ordering(49) 00:13:26.912 fused_ordering(50) 00:13:26.912 fused_ordering(51) 00:13:26.912 fused_ordering(52) 00:13:26.912 fused_ordering(53) 00:13:26.912 fused_ordering(54) 00:13:26.912 fused_ordering(55) 00:13:26.912 fused_ordering(56) 00:13:26.912 fused_ordering(57) 00:13:26.912 fused_ordering(58) 00:13:26.912 fused_ordering(59) 00:13:26.912 fused_ordering(60) 00:13:26.912 fused_ordering(61) 00:13:26.912 fused_ordering(62) 00:13:26.912 fused_ordering(63) 00:13:26.912 fused_ordering(64) 00:13:26.912 fused_ordering(65) 00:13:26.912 fused_ordering(66) 00:13:26.912 fused_ordering(67) 00:13:26.912 fused_ordering(68) 00:13:26.912 fused_ordering(69) 00:13:26.912 fused_ordering(70) 00:13:26.912 fused_ordering(71) 00:13:26.912 fused_ordering(72) 00:13:26.912 fused_ordering(73) 00:13:26.912 fused_ordering(74) 00:13:26.912 fused_ordering(75) 00:13:26.912 fused_ordering(76) 00:13:26.912 fused_ordering(77) 00:13:26.912 fused_ordering(78) 00:13:26.912 fused_ordering(79) 00:13:26.912 fused_ordering(80) 00:13:26.912 fused_ordering(81) 00:13:26.912 fused_ordering(82) 00:13:26.912 fused_ordering(83) 00:13:26.912 fused_ordering(84) 00:13:26.912 fused_ordering(85) 00:13:26.912 fused_ordering(86) 00:13:26.912 fused_ordering(87) 00:13:26.912 fused_ordering(88) 00:13:26.912 fused_ordering(89) 00:13:26.912 fused_ordering(90) 00:13:26.912 fused_ordering(91) 00:13:26.912 fused_ordering(92) 00:13:26.912 fused_ordering(93) 00:13:26.912 fused_ordering(94) 00:13:26.912 fused_ordering(95) 00:13:26.912 fused_ordering(96) 00:13:26.912 fused_ordering(97) 00:13:26.912 fused_ordering(98) 
00:13:26.912 fused_ordering(99) 00:13:26.912 fused_ordering(100) 00:13:26.912 fused_ordering(101) 00:13:26.912 fused_ordering(102) 00:13:26.912 fused_ordering(103) 00:13:26.912 fused_ordering(104) 00:13:26.912 fused_ordering(105) 00:13:26.912 fused_ordering(106) 00:13:26.912 fused_ordering(107) 00:13:26.912 fused_ordering(108) 00:13:26.912 fused_ordering(109) 00:13:26.912 fused_ordering(110) 00:13:26.912 fused_ordering(111) 00:13:26.912 fused_ordering(112) 00:13:26.912 fused_ordering(113) 00:13:26.912 fused_ordering(114) 00:13:26.912 fused_ordering(115) 00:13:26.912 fused_ordering(116) 00:13:26.912 fused_ordering(117) 00:13:26.912 fused_ordering(118) 00:13:26.912 fused_ordering(119) 00:13:26.912 fused_ordering(120) 00:13:26.912 fused_ordering(121) 00:13:26.912 fused_ordering(122) 00:13:26.912 fused_ordering(123) 00:13:26.912 fused_ordering(124) 00:13:26.912 fused_ordering(125) 00:13:26.912 fused_ordering(126) 00:13:26.912 fused_ordering(127) 00:13:26.912 fused_ordering(128) 00:13:26.912 fused_ordering(129) 00:13:26.912 fused_ordering(130) 00:13:26.912 fused_ordering(131) 00:13:26.912 fused_ordering(132) 00:13:26.912 fused_ordering(133) 00:13:26.912 fused_ordering(134) 00:13:26.912 fused_ordering(135) 00:13:26.912 fused_ordering(136) 00:13:26.912 fused_ordering(137) 00:13:26.912 fused_ordering(138) 00:13:26.912 fused_ordering(139) 00:13:26.912 fused_ordering(140) 00:13:26.912 fused_ordering(141) 00:13:26.912 fused_ordering(142) 00:13:26.912 fused_ordering(143) 00:13:26.912 fused_ordering(144) 00:13:26.912 fused_ordering(145) 00:13:26.912 fused_ordering(146) 00:13:26.912 fused_ordering(147) 00:13:26.912 fused_ordering(148) 00:13:26.912 fused_ordering(149) 00:13:26.912 fused_ordering(150) 00:13:26.912 fused_ordering(151) 00:13:26.912 fused_ordering(152) 00:13:26.912 fused_ordering(153) 00:13:26.912 fused_ordering(154) 00:13:26.912 fused_ordering(155) 00:13:26.912 fused_ordering(156) 00:13:26.912 fused_ordering(157) 00:13:26.912 fused_ordering(158) 00:13:26.912 fused_ordering(159) 00:13:26.912 fused_ordering(160) 00:13:26.912 fused_ordering(161) 00:13:26.912 fused_ordering(162) 00:13:26.912 fused_ordering(163) 00:13:26.912 fused_ordering(164) 00:13:26.912 fused_ordering(165) 00:13:26.912 fused_ordering(166) 00:13:26.912 fused_ordering(167) 00:13:26.912 fused_ordering(168) 00:13:26.912 fused_ordering(169) 00:13:26.912 fused_ordering(170) 00:13:26.912 fused_ordering(171) 00:13:26.912 fused_ordering(172) 00:13:26.912 fused_ordering(173) 00:13:26.912 fused_ordering(174) 00:13:26.912 fused_ordering(175) 00:13:26.912 fused_ordering(176) 00:13:26.912 fused_ordering(177) 00:13:26.912 fused_ordering(178) 00:13:26.912 fused_ordering(179) 00:13:26.912 fused_ordering(180) 00:13:26.912 fused_ordering(181) 00:13:26.912 fused_ordering(182) 00:13:26.912 fused_ordering(183) 00:13:26.912 fused_ordering(184) 00:13:26.912 fused_ordering(185) 00:13:26.912 fused_ordering(186) 00:13:26.912 fused_ordering(187) 00:13:26.912 fused_ordering(188) 00:13:26.912 fused_ordering(189) 00:13:26.912 fused_ordering(190) 00:13:26.912 fused_ordering(191) 00:13:26.912 fused_ordering(192) 00:13:26.912 fused_ordering(193) 00:13:26.912 fused_ordering(194) 00:13:26.912 fused_ordering(195) 00:13:26.912 fused_ordering(196) 00:13:26.912 fused_ordering(197) 00:13:26.912 fused_ordering(198) 00:13:26.912 fused_ordering(199) 00:13:26.912 fused_ordering(200) 00:13:26.912 fused_ordering(201) 00:13:26.912 fused_ordering(202) 00:13:26.912 fused_ordering(203) 00:13:26.912 fused_ordering(204) 00:13:26.912 fused_ordering(205) 00:13:27.172 
fused_ordering(206) 00:13:27.172 fused_ordering(207) 00:13:27.172 fused_ordering(208) 00:13:27.172 fused_ordering(209) 00:13:27.172 fused_ordering(210) 00:13:27.172 fused_ordering(211) 00:13:27.172 fused_ordering(212) 00:13:27.172 fused_ordering(213) 00:13:27.172 fused_ordering(214) 00:13:27.172 fused_ordering(215) 00:13:27.172 fused_ordering(216) 00:13:27.172 fused_ordering(217) 00:13:27.172 fused_ordering(218) 00:13:27.172 fused_ordering(219) 00:13:27.172 fused_ordering(220) 00:13:27.172 fused_ordering(221) 00:13:27.172 fused_ordering(222) 00:13:27.172 fused_ordering(223) 00:13:27.172 fused_ordering(224) 00:13:27.172 fused_ordering(225) 00:13:27.172 fused_ordering(226) 00:13:27.172 fused_ordering(227) 00:13:27.172 fused_ordering(228) 00:13:27.172 fused_ordering(229) 00:13:27.172 fused_ordering(230) 00:13:27.172 fused_ordering(231) 00:13:27.172 fused_ordering(232) 00:13:27.172 fused_ordering(233) 00:13:27.172 fused_ordering(234) 00:13:27.172 fused_ordering(235) 00:13:27.172 fused_ordering(236) 00:13:27.172 fused_ordering(237) 00:13:27.172 fused_ordering(238) 00:13:27.172 fused_ordering(239) 00:13:27.172 fused_ordering(240) 00:13:27.172 fused_ordering(241) 00:13:27.172 fused_ordering(242) 00:13:27.172 fused_ordering(243) 00:13:27.172 fused_ordering(244) 00:13:27.172 fused_ordering(245) 00:13:27.172 fused_ordering(246) 00:13:27.172 fused_ordering(247) 00:13:27.172 fused_ordering(248) 00:13:27.172 fused_ordering(249) 00:13:27.172 fused_ordering(250) 00:13:27.172 fused_ordering(251) 00:13:27.172 fused_ordering(252) 00:13:27.172 fused_ordering(253) 00:13:27.172 fused_ordering(254) 00:13:27.172 fused_ordering(255) 00:13:27.172 fused_ordering(256) 00:13:27.172 fused_ordering(257) 00:13:27.172 fused_ordering(258) 00:13:27.172 fused_ordering(259) 00:13:27.172 fused_ordering(260) 00:13:27.172 fused_ordering(261) 00:13:27.172 fused_ordering(262) 00:13:27.172 fused_ordering(263) 00:13:27.172 fused_ordering(264) 00:13:27.172 fused_ordering(265) 00:13:27.172 fused_ordering(266) 00:13:27.172 fused_ordering(267) 00:13:27.172 fused_ordering(268) 00:13:27.172 fused_ordering(269) 00:13:27.172 fused_ordering(270) 00:13:27.172 fused_ordering(271) 00:13:27.172 fused_ordering(272) 00:13:27.172 fused_ordering(273) 00:13:27.172 fused_ordering(274) 00:13:27.172 fused_ordering(275) 00:13:27.172 fused_ordering(276) 00:13:27.172 fused_ordering(277) 00:13:27.172 fused_ordering(278) 00:13:27.172 fused_ordering(279) 00:13:27.172 fused_ordering(280) 00:13:27.172 fused_ordering(281) 00:13:27.172 fused_ordering(282) 00:13:27.172 fused_ordering(283) 00:13:27.172 fused_ordering(284) 00:13:27.172 fused_ordering(285) 00:13:27.172 fused_ordering(286) 00:13:27.172 fused_ordering(287) 00:13:27.172 fused_ordering(288) 00:13:27.172 fused_ordering(289) 00:13:27.172 fused_ordering(290) 00:13:27.172 fused_ordering(291) 00:13:27.172 fused_ordering(292) 00:13:27.172 fused_ordering(293) 00:13:27.172 fused_ordering(294) 00:13:27.172 fused_ordering(295) 00:13:27.172 fused_ordering(296) 00:13:27.172 fused_ordering(297) 00:13:27.172 fused_ordering(298) 00:13:27.172 fused_ordering(299) 00:13:27.172 fused_ordering(300) 00:13:27.172 fused_ordering(301) 00:13:27.172 fused_ordering(302) 00:13:27.172 fused_ordering(303) 00:13:27.172 fused_ordering(304) 00:13:27.172 fused_ordering(305) 00:13:27.172 fused_ordering(306) 00:13:27.172 fused_ordering(307) 00:13:27.172 fused_ordering(308) 00:13:27.172 fused_ordering(309) 00:13:27.172 fused_ordering(310) 00:13:27.172 fused_ordering(311) 00:13:27.172 fused_ordering(312) 00:13:27.172 fused_ordering(313) 
00:13:27.172 fused_ordering(314) 00:13:27.172 fused_ordering(315) 00:13:27.172 fused_ordering(316) 00:13:27.172 fused_ordering(317) 00:13:27.172 fused_ordering(318) 00:13:27.172 fused_ordering(319) 00:13:27.172 fused_ordering(320) 00:13:27.172 fused_ordering(321) 00:13:27.172 fused_ordering(322) 00:13:27.172 fused_ordering(323) 00:13:27.172 fused_ordering(324) 00:13:27.172 fused_ordering(325) 00:13:27.172 fused_ordering(326) 00:13:27.172 fused_ordering(327) 00:13:27.172 fused_ordering(328) 00:13:27.172 fused_ordering(329) 00:13:27.172 fused_ordering(330) 00:13:27.172 fused_ordering(331) 00:13:27.172 fused_ordering(332) 00:13:27.172 fused_ordering(333) 00:13:27.172 fused_ordering(334) 00:13:27.172 fused_ordering(335) 00:13:27.172 fused_ordering(336) 00:13:27.172 fused_ordering(337) 00:13:27.172 fused_ordering(338) 00:13:27.172 fused_ordering(339) 00:13:27.172 fused_ordering(340) 00:13:27.172 fused_ordering(341) 00:13:27.172 fused_ordering(342) 00:13:27.172 fused_ordering(343) 00:13:27.172 fused_ordering(344) 00:13:27.172 fused_ordering(345) 00:13:27.172 fused_ordering(346) 00:13:27.172 fused_ordering(347) 00:13:27.172 fused_ordering(348) 00:13:27.172 fused_ordering(349) 00:13:27.172 fused_ordering(350) 00:13:27.172 fused_ordering(351) 00:13:27.172 fused_ordering(352) 00:13:27.172 fused_ordering(353) 00:13:27.172 fused_ordering(354) 00:13:27.172 fused_ordering(355) 00:13:27.172 fused_ordering(356) 00:13:27.172 fused_ordering(357) 00:13:27.172 fused_ordering(358) 00:13:27.172 fused_ordering(359) 00:13:27.172 fused_ordering(360) 00:13:27.172 fused_ordering(361) 00:13:27.172 fused_ordering(362) 00:13:27.172 fused_ordering(363) 00:13:27.172 fused_ordering(364) 00:13:27.172 fused_ordering(365) 00:13:27.172 fused_ordering(366) 00:13:27.172 fused_ordering(367) 00:13:27.172 fused_ordering(368) 00:13:27.172 fused_ordering(369) 00:13:27.172 fused_ordering(370) 00:13:27.172 fused_ordering(371) 00:13:27.172 fused_ordering(372) 00:13:27.172 fused_ordering(373) 00:13:27.172 fused_ordering(374) 00:13:27.172 fused_ordering(375) 00:13:27.172 fused_ordering(376) 00:13:27.172 fused_ordering(377) 00:13:27.172 fused_ordering(378) 00:13:27.172 fused_ordering(379) 00:13:27.172 fused_ordering(380) 00:13:27.172 fused_ordering(381) 00:13:27.172 fused_ordering(382) 00:13:27.172 fused_ordering(383) 00:13:27.172 fused_ordering(384) 00:13:27.172 fused_ordering(385) 00:13:27.172 fused_ordering(386) 00:13:27.172 fused_ordering(387) 00:13:27.172 fused_ordering(388) 00:13:27.172 fused_ordering(389) 00:13:27.172 fused_ordering(390) 00:13:27.172 fused_ordering(391) 00:13:27.172 fused_ordering(392) 00:13:27.172 fused_ordering(393) 00:13:27.172 fused_ordering(394) 00:13:27.172 fused_ordering(395) 00:13:27.172 fused_ordering(396) 00:13:27.172 fused_ordering(397) 00:13:27.172 fused_ordering(398) 00:13:27.172 fused_ordering(399) 00:13:27.172 fused_ordering(400) 00:13:27.172 fused_ordering(401) 00:13:27.172 fused_ordering(402) 00:13:27.172 fused_ordering(403) 00:13:27.172 fused_ordering(404) 00:13:27.172 fused_ordering(405) 00:13:27.172 fused_ordering(406) 00:13:27.172 fused_ordering(407) 00:13:27.172 fused_ordering(408) 00:13:27.172 fused_ordering(409) 00:13:27.172 fused_ordering(410) 00:13:27.431 fused_ordering(411) 00:13:27.431 fused_ordering(412) 00:13:27.431 fused_ordering(413) 00:13:27.431 fused_ordering(414) 00:13:27.431 fused_ordering(415) 00:13:27.431 fused_ordering(416) 00:13:27.431 fused_ordering(417) 00:13:27.431 fused_ordering(418) 00:13:27.431 fused_ordering(419) 00:13:27.431 fused_ordering(420) 00:13:27.431 
fused_ordering(421) 00:13:27.431 fused_ordering(422) 00:13:27.431 fused_ordering(423) 00:13:27.431 fused_ordering(424) 00:13:27.431 fused_ordering(425) 00:13:27.431 fused_ordering(426) 00:13:27.431 fused_ordering(427) 00:13:27.431 fused_ordering(428) 00:13:27.431 fused_ordering(429) 00:13:27.431 fused_ordering(430) 00:13:27.431 fused_ordering(431) 00:13:27.431 fused_ordering(432) 00:13:27.431 fused_ordering(433) 00:13:27.431 fused_ordering(434) 00:13:27.431 fused_ordering(435) 00:13:27.431 fused_ordering(436) 00:13:27.431 fused_ordering(437) 00:13:27.431 fused_ordering(438) 00:13:27.431 fused_ordering(439) 00:13:27.431 fused_ordering(440) 00:13:27.431 fused_ordering(441) 00:13:27.431 fused_ordering(442) 00:13:27.431 fused_ordering(443) 00:13:27.431 fused_ordering(444) 00:13:27.431 fused_ordering(445) 00:13:27.431 fused_ordering(446) 00:13:27.431 fused_ordering(447) 00:13:27.431 fused_ordering(448) 00:13:27.431 fused_ordering(449) 00:13:27.431 fused_ordering(450) 00:13:27.431 fused_ordering(451) 00:13:27.431 fused_ordering(452) 00:13:27.431 fused_ordering(453) 00:13:27.431 fused_ordering(454) 00:13:27.431 fused_ordering(455) 00:13:27.431 fused_ordering(456) 00:13:27.431 fused_ordering(457) 00:13:27.431 fused_ordering(458) 00:13:27.431 fused_ordering(459) 00:13:27.431 fused_ordering(460) 00:13:27.431 fused_ordering(461) 00:13:27.431 fused_ordering(462) 00:13:27.431 fused_ordering(463) 00:13:27.431 fused_ordering(464) 00:13:27.431 fused_ordering(465) 00:13:27.431 fused_ordering(466) 00:13:27.431 fused_ordering(467) 00:13:27.431 fused_ordering(468) 00:13:27.431 fused_ordering(469) 00:13:27.431 fused_ordering(470) 00:13:27.431 fused_ordering(471) 00:13:27.431 fused_ordering(472) 00:13:27.431 fused_ordering(473) 00:13:27.431 fused_ordering(474) 00:13:27.431 fused_ordering(475) 00:13:27.431 fused_ordering(476) 00:13:27.431 fused_ordering(477) 00:13:27.431 fused_ordering(478) 00:13:27.431 fused_ordering(479) 00:13:27.431 fused_ordering(480) 00:13:27.431 fused_ordering(481) 00:13:27.431 fused_ordering(482) 00:13:27.431 fused_ordering(483) 00:13:27.431 fused_ordering(484) 00:13:27.431 fused_ordering(485) 00:13:27.431 fused_ordering(486) 00:13:27.431 fused_ordering(487) 00:13:27.431 fused_ordering(488) 00:13:27.431 fused_ordering(489) 00:13:27.431 fused_ordering(490) 00:13:27.431 fused_ordering(491) 00:13:27.431 fused_ordering(492) 00:13:27.431 fused_ordering(493) 00:13:27.431 fused_ordering(494) 00:13:27.431 fused_ordering(495) 00:13:27.431 fused_ordering(496) 00:13:27.431 fused_ordering(497) 00:13:27.431 fused_ordering(498) 00:13:27.431 fused_ordering(499) 00:13:27.431 fused_ordering(500) 00:13:27.431 fused_ordering(501) 00:13:27.431 fused_ordering(502) 00:13:27.431 fused_ordering(503) 00:13:27.431 fused_ordering(504) 00:13:27.431 fused_ordering(505) 00:13:27.431 fused_ordering(506) 00:13:27.431 fused_ordering(507) 00:13:27.431 fused_ordering(508) 00:13:27.431 fused_ordering(509) 00:13:27.431 fused_ordering(510) 00:13:27.431 fused_ordering(511) 00:13:27.431 fused_ordering(512) 00:13:27.431 fused_ordering(513) 00:13:27.431 fused_ordering(514) 00:13:27.431 fused_ordering(515) 00:13:27.431 fused_ordering(516) 00:13:27.431 fused_ordering(517) 00:13:27.431 fused_ordering(518) 00:13:27.431 fused_ordering(519) 00:13:27.431 fused_ordering(520) 00:13:27.431 fused_ordering(521) 00:13:27.431 fused_ordering(522) 00:13:27.431 fused_ordering(523) 00:13:27.431 fused_ordering(524) 00:13:27.431 fused_ordering(525) 00:13:27.431 fused_ordering(526) 00:13:27.431 fused_ordering(527) 00:13:27.431 fused_ordering(528) 
00:13:27.431 fused_ordering(529) 00:13:27.431 fused_ordering(530) 00:13:27.431 fused_ordering(531) 00:13:27.431 fused_ordering(532) 00:13:27.431 fused_ordering(533) 00:13:27.431 fused_ordering(534) 00:13:27.431 fused_ordering(535) 00:13:27.431 fused_ordering(536) 00:13:27.431 fused_ordering(537) 00:13:27.431 fused_ordering(538) 00:13:27.431 fused_ordering(539) 00:13:27.431 fused_ordering(540) 00:13:27.431 fused_ordering(541) 00:13:27.431 fused_ordering(542) 00:13:27.431 fused_ordering(543) 00:13:27.431 fused_ordering(544) 00:13:27.431 fused_ordering(545) 00:13:27.431 fused_ordering(546) 00:13:27.431 fused_ordering(547) 00:13:27.431 fused_ordering(548) 00:13:27.431 fused_ordering(549) 00:13:27.431 fused_ordering(550) 00:13:27.431 fused_ordering(551) 00:13:27.431 fused_ordering(552) 00:13:27.431 fused_ordering(553) 00:13:27.431 fused_ordering(554) 00:13:27.431 fused_ordering(555) 00:13:27.431 fused_ordering(556) 00:13:27.431 fused_ordering(557) 00:13:27.431 fused_ordering(558) 00:13:27.432 fused_ordering(559) 00:13:27.432 fused_ordering(560) 00:13:27.432 fused_ordering(561) 00:13:27.432 fused_ordering(562) 00:13:27.432 fused_ordering(563) 00:13:27.432 fused_ordering(564) 00:13:27.432 fused_ordering(565) 00:13:27.432 fused_ordering(566) 00:13:27.432 fused_ordering(567) 00:13:27.432 fused_ordering(568) 00:13:27.432 fused_ordering(569) 00:13:27.432 fused_ordering(570) 00:13:27.432 fused_ordering(571) 00:13:27.432 fused_ordering(572) 00:13:27.432 fused_ordering(573) 00:13:27.432 fused_ordering(574) 00:13:27.432 fused_ordering(575) 00:13:27.432 fused_ordering(576) 00:13:27.432 fused_ordering(577) 00:13:27.432 fused_ordering(578) 00:13:27.432 fused_ordering(579) 00:13:27.432 fused_ordering(580) 00:13:27.432 fused_ordering(581) 00:13:27.432 fused_ordering(582) 00:13:27.432 fused_ordering(583) 00:13:27.432 fused_ordering(584) 00:13:27.432 fused_ordering(585) 00:13:27.432 fused_ordering(586) 00:13:27.432 fused_ordering(587) 00:13:27.432 fused_ordering(588) 00:13:27.432 fused_ordering(589) 00:13:27.432 fused_ordering(590) 00:13:27.432 fused_ordering(591) 00:13:27.432 fused_ordering(592) 00:13:27.432 fused_ordering(593) 00:13:27.432 fused_ordering(594) 00:13:27.432 fused_ordering(595) 00:13:27.432 fused_ordering(596) 00:13:27.432 fused_ordering(597) 00:13:27.432 fused_ordering(598) 00:13:27.432 fused_ordering(599) 00:13:27.432 fused_ordering(600) 00:13:27.432 fused_ordering(601) 00:13:27.432 fused_ordering(602) 00:13:27.432 fused_ordering(603) 00:13:27.432 fused_ordering(604) 00:13:27.432 fused_ordering(605) 00:13:27.432 fused_ordering(606) 00:13:27.432 fused_ordering(607) 00:13:27.432 fused_ordering(608) 00:13:27.432 fused_ordering(609) 00:13:27.432 fused_ordering(610) 00:13:27.432 fused_ordering(611) 00:13:27.432 fused_ordering(612) 00:13:27.432 fused_ordering(613) 00:13:27.432 fused_ordering(614) 00:13:27.432 fused_ordering(615) 00:13:27.999 fused_ordering(616) 00:13:27.999 fused_ordering(617) 00:13:27.999 fused_ordering(618) 00:13:27.999 fused_ordering(619) 00:13:27.999 fused_ordering(620) 00:13:27.999 fused_ordering(621) 00:13:27.999 fused_ordering(622) 00:13:27.999 fused_ordering(623) 00:13:27.999 fused_ordering(624) 00:13:27.999 fused_ordering(625) 00:13:27.999 fused_ordering(626) 00:13:27.999 fused_ordering(627) 00:13:27.999 fused_ordering(628) 00:13:27.999 fused_ordering(629) 00:13:27.999 fused_ordering(630) 00:13:27.999 fused_ordering(631) 00:13:27.999 fused_ordering(632) 00:13:27.999 fused_ordering(633) 00:13:27.999 fused_ordering(634) 00:13:27.999 fused_ordering(635) 00:13:27.999 
fused_ordering(636) 00:13:27.999 fused_ordering(637) 00:13:27.999 fused_ordering(638) 00:13:27.999 fused_ordering(639) 00:13:27.999 fused_ordering(640) 00:13:27.999 fused_ordering(641) 00:13:27.999 fused_ordering(642) 00:13:27.999 fused_ordering(643) 00:13:27.999 fused_ordering(644) 00:13:27.999 fused_ordering(645) 00:13:27.999 fused_ordering(646) 00:13:27.999 fused_ordering(647) 00:13:28.000 fused_ordering(648) 00:13:28.000 fused_ordering(649) 00:13:28.000 fused_ordering(650) 00:13:28.000 fused_ordering(651) 00:13:28.000 fused_ordering(652) 00:13:28.000 fused_ordering(653) 00:13:28.000 fused_ordering(654) 00:13:28.000 fused_ordering(655) 00:13:28.000 fused_ordering(656) 00:13:28.000 fused_ordering(657) 00:13:28.000 fused_ordering(658) 00:13:28.000 fused_ordering(659) 00:13:28.000 fused_ordering(660) 00:13:28.000 fused_ordering(661) 00:13:28.000 fused_ordering(662) 00:13:28.000 fused_ordering(663) 00:13:28.000 fused_ordering(664) 00:13:28.000 fused_ordering(665) 00:13:28.000 fused_ordering(666) 00:13:28.000 fused_ordering(667) 00:13:28.000 fused_ordering(668) 00:13:28.000 fused_ordering(669) 00:13:28.000 fused_ordering(670) 00:13:28.000 fused_ordering(671) 00:13:28.000 fused_ordering(672) 00:13:28.000 fused_ordering(673) 00:13:28.000 fused_ordering(674) 00:13:28.000 fused_ordering(675) 00:13:28.000 fused_ordering(676) 00:13:28.000 fused_ordering(677) 00:13:28.000 fused_ordering(678) 00:13:28.000 fused_ordering(679) 00:13:28.000 fused_ordering(680) 00:13:28.000 fused_ordering(681) 00:13:28.000 fused_ordering(682) 00:13:28.000 fused_ordering(683) 00:13:28.000 fused_ordering(684) 00:13:28.000 fused_ordering(685) 00:13:28.000 fused_ordering(686) 00:13:28.000 fused_ordering(687) 00:13:28.000 fused_ordering(688) 00:13:28.000 fused_ordering(689) 00:13:28.000 fused_ordering(690) 00:13:28.000 fused_ordering(691) 00:13:28.000 fused_ordering(692) 00:13:28.000 fused_ordering(693) 00:13:28.000 fused_ordering(694) 00:13:28.000 fused_ordering(695) 00:13:28.000 fused_ordering(696) 00:13:28.000 fused_ordering(697) 00:13:28.000 fused_ordering(698) 00:13:28.000 fused_ordering(699) 00:13:28.000 fused_ordering(700) 00:13:28.000 fused_ordering(701) 00:13:28.000 fused_ordering(702) 00:13:28.000 fused_ordering(703) 00:13:28.000 fused_ordering(704) 00:13:28.000 fused_ordering(705) 00:13:28.000 fused_ordering(706) 00:13:28.000 fused_ordering(707) 00:13:28.000 fused_ordering(708) 00:13:28.000 fused_ordering(709) 00:13:28.000 fused_ordering(710) 00:13:28.000 fused_ordering(711) 00:13:28.000 fused_ordering(712) 00:13:28.000 fused_ordering(713) 00:13:28.000 fused_ordering(714) 00:13:28.000 fused_ordering(715) 00:13:28.000 fused_ordering(716) 00:13:28.000 fused_ordering(717) 00:13:28.000 fused_ordering(718) 00:13:28.000 fused_ordering(719) 00:13:28.000 fused_ordering(720) 00:13:28.000 fused_ordering(721) 00:13:28.000 fused_ordering(722) 00:13:28.000 fused_ordering(723) 00:13:28.000 fused_ordering(724) 00:13:28.000 fused_ordering(725) 00:13:28.000 fused_ordering(726) 00:13:28.000 fused_ordering(727) 00:13:28.000 fused_ordering(728) 00:13:28.000 fused_ordering(729) 00:13:28.000 fused_ordering(730) 00:13:28.000 fused_ordering(731) 00:13:28.000 fused_ordering(732) 00:13:28.000 fused_ordering(733) 00:13:28.000 fused_ordering(734) 00:13:28.000 fused_ordering(735) 00:13:28.000 fused_ordering(736) 00:13:28.000 fused_ordering(737) 00:13:28.000 fused_ordering(738) 00:13:28.000 fused_ordering(739) 00:13:28.000 fused_ordering(740) 00:13:28.000 fused_ordering(741) 00:13:28.000 fused_ordering(742) 00:13:28.000 fused_ordering(743) 
00:13:28.000 fused_ordering(744) 00:13:28.000 fused_ordering(745) 00:13:28.000 fused_ordering(746) 00:13:28.000 fused_ordering(747) 00:13:28.000 fused_ordering(748) 00:13:28.000 fused_ordering(749) 00:13:28.000 fused_ordering(750) 00:13:28.000 fused_ordering(751) 00:13:28.000 fused_ordering(752) 00:13:28.000 fused_ordering(753) 00:13:28.000 fused_ordering(754) 00:13:28.000 fused_ordering(755) 00:13:28.000 fused_ordering(756) 00:13:28.000 fused_ordering(757) 00:13:28.000 fused_ordering(758) 00:13:28.000 fused_ordering(759) 00:13:28.000 fused_ordering(760) 00:13:28.000 fused_ordering(761) 00:13:28.000 fused_ordering(762) 00:13:28.000 fused_ordering(763) 00:13:28.000 fused_ordering(764) 00:13:28.000 fused_ordering(765) 00:13:28.000 fused_ordering(766) 00:13:28.000 fused_ordering(767) 00:13:28.000 fused_ordering(768) 00:13:28.000 fused_ordering(769) 00:13:28.000 fused_ordering(770) 00:13:28.000 fused_ordering(771) 00:13:28.000 fused_ordering(772) 00:13:28.000 fused_ordering(773) 00:13:28.000 fused_ordering(774) 00:13:28.000 fused_ordering(775) 00:13:28.000 fused_ordering(776) 00:13:28.000 fused_ordering(777) 00:13:28.000 fused_ordering(778) 00:13:28.000 fused_ordering(779) 00:13:28.000 fused_ordering(780) 00:13:28.000 fused_ordering(781) 00:13:28.000 fused_ordering(782) 00:13:28.000 fused_ordering(783) 00:13:28.000 fused_ordering(784) 00:13:28.000 fused_ordering(785) 00:13:28.000 fused_ordering(786) 00:13:28.000 fused_ordering(787) 00:13:28.000 fused_ordering(788) 00:13:28.000 fused_ordering(789) 00:13:28.000 fused_ordering(790) 00:13:28.000 fused_ordering(791) 00:13:28.000 fused_ordering(792) 00:13:28.000 fused_ordering(793) 00:13:28.000 fused_ordering(794) 00:13:28.000 fused_ordering(795) 00:13:28.000 fused_ordering(796) 00:13:28.000 fused_ordering(797) 00:13:28.000 fused_ordering(798) 00:13:28.000 fused_ordering(799) 00:13:28.000 fused_ordering(800) 00:13:28.000 fused_ordering(801) 00:13:28.000 fused_ordering(802) 00:13:28.000 fused_ordering(803) 00:13:28.000 fused_ordering(804) 00:13:28.000 fused_ordering(805) 00:13:28.000 fused_ordering(806) 00:13:28.000 fused_ordering(807) 00:13:28.000 fused_ordering(808) 00:13:28.000 fused_ordering(809) 00:13:28.000 fused_ordering(810) 00:13:28.000 fused_ordering(811) 00:13:28.000 fused_ordering(812) 00:13:28.000 fused_ordering(813) 00:13:28.000 fused_ordering(814) 00:13:28.000 fused_ordering(815) 00:13:28.000 fused_ordering(816) 00:13:28.000 fused_ordering(817) 00:13:28.000 fused_ordering(818) 00:13:28.000 fused_ordering(819) 00:13:28.000 fused_ordering(820) 00:13:28.259 [2024-12-10 04:00:27.510037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3f340 is same with the state(6) to be set 00:13:28.259 fused_ordering(821) 00:13:28.259 fused_ordering(822) 00:13:28.259 fused_ordering(823) 00:13:28.259 fused_ordering(824) 00:13:28.259 fused_ordering(825) 00:13:28.259 fused_ordering(826) 00:13:28.259 fused_ordering(827) 00:13:28.259 fused_ordering(828) 00:13:28.259 fused_ordering(829) 00:13:28.259 fused_ordering(830) 00:13:28.259 fused_ordering(831) 00:13:28.259 fused_ordering(832) 00:13:28.259 fused_ordering(833) 00:13:28.259 fused_ordering(834) 00:13:28.259 fused_ordering(835) 00:13:28.259 fused_ordering(836) 00:13:28.259 fused_ordering(837) 00:13:28.259 fused_ordering(838) 00:13:28.259 fused_ordering(839) 00:13:28.259 fused_ordering(840) 00:13:28.259 fused_ordering(841) 00:13:28.259 fused_ordering(842) 00:13:28.259 fused_ordering(843) 00:13:28.259 fused_ordering(844) 00:13:28.259 fused_ordering(845) 00:13:28.259
fused_ordering(846) 00:13:28.259 fused_ordering(847) 00:13:28.259 fused_ordering(848) 00:13:28.259 fused_ordering(849) 00:13:28.259 fused_ordering(850) 00:13:28.259 fused_ordering(851) 00:13:28.259 fused_ordering(852) 00:13:28.259 fused_ordering(853) 00:13:28.259 fused_ordering(854) 00:13:28.259 fused_ordering(855) 00:13:28.259 fused_ordering(856) 00:13:28.259 fused_ordering(857) 00:13:28.259 fused_ordering(858) 00:13:28.259 fused_ordering(859) 00:13:28.259 fused_ordering(860) 00:13:28.259 fused_ordering(861) 00:13:28.259 fused_ordering(862) 00:13:28.259 fused_ordering(863) 00:13:28.259 fused_ordering(864) 00:13:28.259 fused_ordering(865) 00:13:28.259 fused_ordering(866) 00:13:28.259 fused_ordering(867) 00:13:28.259 fused_ordering(868) 00:13:28.259 fused_ordering(869) 00:13:28.259 fused_ordering(870) 00:13:28.259 fused_ordering(871) 00:13:28.259 fused_ordering(872) 00:13:28.259 fused_ordering(873) 00:13:28.259 fused_ordering(874) 00:13:28.259 fused_ordering(875) 00:13:28.259 fused_ordering(876) 00:13:28.259 fused_ordering(877) 00:13:28.259 fused_ordering(878) 00:13:28.259 fused_ordering(879) 00:13:28.259 fused_ordering(880) 00:13:28.259 fused_ordering(881) 00:13:28.259 fused_ordering(882) 00:13:28.259 fused_ordering(883) 00:13:28.259 fused_ordering(884) 00:13:28.259 fused_ordering(885) 00:13:28.259 fused_ordering(886) 00:13:28.259 fused_ordering(887) 00:13:28.259 fused_ordering(888) 00:13:28.259 fused_ordering(889) 00:13:28.259 fused_ordering(890) 00:13:28.259 fused_ordering(891) 00:13:28.259 fused_ordering(892) 00:13:28.259 fused_ordering(893) 00:13:28.259 fused_ordering(894) 00:13:28.259 fused_ordering(895) 00:13:28.259 fused_ordering(896) 00:13:28.259 fused_ordering(897) 00:13:28.259 fused_ordering(898) 00:13:28.259 fused_ordering(899) 00:13:28.259 fused_ordering(900) 00:13:28.259 fused_ordering(901) 00:13:28.259 fused_ordering(902) 00:13:28.259 fused_ordering(903) 00:13:28.259 fused_ordering(904) 00:13:28.259 fused_ordering(905) 00:13:28.259 fused_ordering(906) 00:13:28.259 fused_ordering(907) 00:13:28.259 fused_ordering(908) 00:13:28.259 fused_ordering(909) 00:13:28.259 fused_ordering(910) 00:13:28.259 fused_ordering(911) 00:13:28.259 fused_ordering(912) 00:13:28.259 fused_ordering(913) 00:13:28.259 fused_ordering(914) 00:13:28.259 fused_ordering(915) 00:13:28.259 fused_ordering(916) 00:13:28.259 fused_ordering(917) 00:13:28.259 fused_ordering(918) 00:13:28.259 fused_ordering(919) 00:13:28.260 fused_ordering(920) 00:13:28.260 fused_ordering(921) 00:13:28.260 fused_ordering(922) 00:13:28.260 fused_ordering(923) 00:13:28.260 fused_ordering(924) 00:13:28.260 fused_ordering(925) 00:13:28.260 fused_ordering(926) 00:13:28.260 fused_ordering(927) 00:13:28.260 fused_ordering(928) 00:13:28.260 fused_ordering(929) 00:13:28.260 fused_ordering(930) 00:13:28.260 fused_ordering(931) 00:13:28.260 fused_ordering(932) 00:13:28.260 fused_ordering(933) 00:13:28.260 fused_ordering(934) 00:13:28.260 fused_ordering(935) 00:13:28.260 fused_ordering(936) 00:13:28.260 fused_ordering(937) 00:13:28.260 fused_ordering(938) 00:13:28.260 fused_ordering(939) 00:13:28.260 fused_ordering(940) 00:13:28.260 fused_ordering(941) 00:13:28.260 fused_ordering(942) 00:13:28.260 fused_ordering(943) 00:13:28.260 fused_ordering(944) 00:13:28.260 fused_ordering(945) 00:13:28.260 fused_ordering(946) 00:13:28.260 fused_ordering(947) 00:13:28.260 fused_ordering(948) 00:13:28.260 fused_ordering(949) 00:13:28.260 fused_ordering(950) 00:13:28.260 fused_ordering(951) 00:13:28.260 fused_ordering(952) 00:13:28.260 fused_ordering(953) 
00:13:28.260 fused_ordering(954) 00:13:28.260 fused_ordering(955) 00:13:28.260 fused_ordering(956) 00:13:28.260 fused_ordering(957) 00:13:28.260 fused_ordering(958) 00:13:28.260 fused_ordering(959) 00:13:28.260 fused_ordering(960) 00:13:28.260 fused_ordering(961) 00:13:28.260 fused_ordering(962) 00:13:28.260 fused_ordering(963) 00:13:28.260 fused_ordering(964) 00:13:28.260 fused_ordering(965) 00:13:28.260 fused_ordering(966) 00:13:28.260 fused_ordering(967) 00:13:28.260 fused_ordering(968) 00:13:28.260 fused_ordering(969) 00:13:28.260 fused_ordering(970) 00:13:28.260 fused_ordering(971) 00:13:28.260 fused_ordering(972) 00:13:28.260 fused_ordering(973) 00:13:28.260 fused_ordering(974) 00:13:28.260 fused_ordering(975) 00:13:28.260 fused_ordering(976) 00:13:28.260 fused_ordering(977) 00:13:28.260 fused_ordering(978) 00:13:28.260 fused_ordering(979) 00:13:28.260 fused_ordering(980) 00:13:28.260 fused_ordering(981) 00:13:28.260 fused_ordering(982) 00:13:28.260 fused_ordering(983) 00:13:28.260 fused_ordering(984) 00:13:28.260 fused_ordering(985) 00:13:28.260 fused_ordering(986) 00:13:28.260 fused_ordering(987) 00:13:28.260 fused_ordering(988) 00:13:28.260 fused_ordering(989) 00:13:28.260 fused_ordering(990) 00:13:28.260 fused_ordering(991) 00:13:28.260 fused_ordering(992) 00:13:28.260 fused_ordering(993) 00:13:28.260 fused_ordering(994) 00:13:28.260 fused_ordering(995) 00:13:28.260 fused_ordering(996) 00:13:28.260 fused_ordering(997) 00:13:28.260 fused_ordering(998) 00:13:28.260 fused_ordering(999) 00:13:28.260 fused_ordering(1000) 00:13:28.260 fused_ordering(1001) 00:13:28.260 fused_ordering(1002) 00:13:28.260 fused_ordering(1003) 00:13:28.260 fused_ordering(1004) 00:13:28.260 fused_ordering(1005) 00:13:28.260 fused_ordering(1006) 00:13:28.260 fused_ordering(1007) 00:13:28.260 fused_ordering(1008) 00:13:28.260 fused_ordering(1009) 00:13:28.260 fused_ordering(1010) 00:13:28.260 fused_ordering(1011) 00:13:28.260 fused_ordering(1012) 00:13:28.260 fused_ordering(1013) 00:13:28.260 fused_ordering(1014) 00:13:28.260 fused_ordering(1015) 00:13:28.260 fused_ordering(1016) 00:13:28.260 fused_ordering(1017) 00:13:28.260 fused_ordering(1018) 00:13:28.260 fused_ordering(1019) 00:13:28.260 fused_ordering(1020) 00:13:28.260 fused_ordering(1021) 00:13:28.260 fused_ordering(1022) 00:13:28.260 fused_ordering(1023) 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.260 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.260 rmmod nvme_tcp 00:13:28.519 rmmod nvme_fabrics 00:13:28.519 rmmod nvme_keyring 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 8614 ']' 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 8614 ']' 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 8614' 00:13:28.519 killing process with pid 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 8614 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.519 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.778 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.778 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.778 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.778 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.778 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.684 00:13:30.684 real 0m10.721s 00:13:30.684 user 0m5.171s 00:13:30.684 sys 0m5.787s 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 
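For context on the counter run above: each fused_ordering(N) entry is one iteration of the fused-ordering exerciser (pid 8634) driving nqn.2016-06.io.spdk:cnode1 over TCP, against a target stood up through SPDK's rpc.py. A minimal sketch of how such a target is typically wired up before the exerciser connects; the rpc.py subcommands are standard SPDK RPCs, but the null-bdev name, the exerciser path and its -r transport string are assumptions inferred from this log, not the literal contents of target/fused_ordering.sh:

    # Hypothetical reconstruction of the target setup behind this test run.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp                      # TCP transport, per NVMF_TRANSPORT_OPTS
    $rpc_py bdev_null_create NULL1 1024 512                   # 1024 MB null bdev -> "Namespace ID: 1 size: 1GB"
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Assumed exerciser invocation; it prints one fused_ordering(N) line per iteration:
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'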
************************************ 00:13:30.684 END TEST nvmf_fused_ordering 00:13:30.684 ************************************ 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 ************************************ 00:13:30.684 START TEST nvmf_ns_masking 00:13:30.684 ************************************ 00:13:30.684 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.944 * Looking for test storage... 00:13:30.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:30.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.944 --rc genhtml_branch_coverage=1 00:13:30.944 --rc genhtml_function_coverage=1 00:13:30.944 --rc genhtml_legend=1 00:13:30.944 --rc geninfo_all_blocks=1 00:13:30.944 --rc geninfo_unexecuted_blocks=1 00:13:30.944 00:13:30.944 ' 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:30.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.944 --rc genhtml_branch_coverage=1 00:13:30.944 --rc genhtml_function_coverage=1 00:13:30.944 --rc genhtml_legend=1 00:13:30.944 --rc geninfo_all_blocks=1 00:13:30.944 --rc geninfo_unexecuted_blocks=1 00:13:30.944 00:13:30.944 ' 00:13:30.944 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:30.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.944 --rc genhtml_branch_coverage=1 00:13:30.944 --rc genhtml_function_coverage=1 00:13:30.944 --rc genhtml_legend=1 00:13:30.944 --rc geninfo_all_blocks=1 00:13:30.944 --rc geninfo_unexecuted_blocks=1 00:13:30.944 00:13:30.944 ' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:30.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.945 --rc genhtml_branch_coverage=1 00:13:30.945 --rc genhtml_function_coverage=1 00:13:30.945 --rc genhtml_legend=1 00:13:30.945 --rc geninfo_all_blocks=1 00:13:30.945 --rc geninfo_unexecuted_blocks=1 00:13:30.945 00:13:30.945 ' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d658eb68-5d7d-40c9-83c2-1acb104e4eb7 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3bcc90ec-7669-4265-b95e-2cf4cc6f2a23 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=56dbd22a-5581-47f1-b177-c2003717c952 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.945 04:00:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.512 04:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:37.512 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:37.512 04:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.512 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:37.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:37.513 Found net devices under 0000:af:00.0: cvl_0_0 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
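The device-discovery trace above reduces to a small sysfs walk: take each candidate PCI function, check its vendor/device IDs against the supported-NIC table, then list the kernel net interfaces registered under it. A standalone sketch of the same logic, assuming only what the log shows (the 0000:af:00.* addresses, the 0x8086/0x159b E810 IDs and the cvl_0_* names come from this run; the loop itself is generic sysfs):

    # Enumerate net devices hanging off the two E810 PCI functions via sysfs.
    for pci in /sys/bus/pci/devices/0000:af:00.0 /sys/bus/pci/devices/0000:af:00.1; do
        echo "Found $(basename "$pci") ($(cat "$pci"/vendor) - $(cat "$pci"/device))"
        for net in "$pci"/net/*; do
            [ -e "$net" ] || continue                 # skip functions with no bound netdev
            echo "Found net devices under $(basename "$pci"): $(basename "$net")"
        done
    done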
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:37.513 Found net devices under 0000:af:00.1: cvl_0_1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.513 04:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:37.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:37.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms
00:13:37.513
00:13:37.513 --- 10.0.0.2 ping statistics ---
00:13:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:37.513 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:37.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:37.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms
00:13:37.513
00:13:37.513 --- 10.0.0.1 ping statistics ---
00:13:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:37.513 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:37.513 04:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=12540
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 12540
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 12540 ']'
00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839
-- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.513 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.514 [2024-12-10 04:00:36.069024] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:37.514 [2024-12-10 04:00:36.069065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.514 [2024-12-10 04:00:36.147606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.514 [2024-12-10 04:00:36.184690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.514 [2024-12-10 04:00:36.184725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.514 [2024-12-10 04:00:36.184732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.514 [2024-12-10 04:00:36.184739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.514 [2024-12-10 04:00:36.184743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
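With nvmf_tgt now listening on /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, the rest of the trace drives namespace masking over JSON-RPC. Condensed into plain shell, the sequence below is a sketch assembled from the RPC calls visible in this run; rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the trace, and the NQNs and sizes are the ones this test uses:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                    # auto-visible: every host sees NSID 1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # masked: no host sees it yet
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1          # unmask NSID 1 for host1 only
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # mask it again

On the initiator side each visibility flip is observed with nvme list-ns /dev/nvme0 plus nvme id-ns -o json piped through jq -r .nguid, which is exactly what the ns_is_visible helper traced below does.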
00:13:37.514 [2024-12-10 04:00:36.185245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.514 [2024-12-10 04:00:36.493002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:37.514 Malloc1 00:13:37.514 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:37.772 Malloc2 00:13:37.772 04:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:38.031 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:38.290 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.290 [2024-12-10 04:00:37.511597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.290 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:38.290 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 56dbd22a-5581-47f1-b177-c2003717c952 -a 10.0.0.2 -s 4420 -i 4 00:13:38.549 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.549 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.549 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.549 04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:38.549 
04:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:40.452 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.711 [ 0]:0x1 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0a60779340e401d8cc3294787b4d685 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0a60779340e401d8cc3294787b4d685 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.711 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.969 [ 0]:0x1 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0a60779340e401d8cc3294787b4d685 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0a60779340e401d8cc3294787b4d685 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.969 04:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.969 [ 1]:0x2 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:40.969 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.228 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.228 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 56dbd22a-5581-47f1-b177-c2003717c952 -a 10.0.0.2 -s 4420 -i 4 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:41.486 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.019 [ 0]:0x2 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.019 04:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.019 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:44.019 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.019 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.019 [ 0]:0x1 00:13:44.019 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0a60779340e401d8cc3294787b4d685 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0a60779340e401d8cc3294787b4d685 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.020 [ 1]:0x2 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.020 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.279 04:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.279 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.539 [ 0]:0x2 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.539 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.798 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:44.798 04:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 56dbd22a-5581-47f1-b177-c2003717c952 -a 10.0.0.2 -s 4420 -i 4 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:44.798 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.330 [ 0]:0x1 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0a60779340e401d8cc3294787b4d685 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0a60779340e401d8cc3294787b4d685 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.330 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.331 [ 1]:0x2 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.331 [ 0]:0x2 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.331 04:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:47.331 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:47.590 [2024-12-10 04:00:46.685704] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:47.590 request:
00:13:47.590 {
00:13:47.590 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:47.590 "nsid": 2,
00:13:47.590 "host": "nqn.2016-06.io.spdk:host1",
00:13:47.590 "method": "nvmf_ns_remove_host",
00:13:47.590 "req_id": 1
00:13:47.590 }
00:13:47.590 Got JSON-RPC error response
00:13:47.590 response:
00:13:47.590 {
00:13:47.590 "code": -32602,
00:13:47.590 "message": "Invalid parameters"
00:13:47.590 }
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:47.590 04:00:46
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.590 [ 0]:0x2 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.590 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23f1b20dfb7a43de8ea0fa74c34b6b8c 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23f1b20dfb7a43de8ea0fa74c34b6b8c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=14489 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 14489 
/var/tmp/host.sock 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:47.849 04:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 14489 ']' 00:13:47.849 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:47.849 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.849 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:47.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:47.849 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.849 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:47.849 [2024-12-10 04:00:47.050530] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:47.849 [2024-12-10 04:00:47.050576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14489 ] 00:13:47.849 [2024-12-10 04:00:47.125267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.108 [2024-12-10 04:00:47.167685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.675 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.675 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:48.675 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.933 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.191 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d658eb68-5d7d-40c9-83c2-1acb104e4eb7 00:13:49.191 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:49.191 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D658EB685D7D40C983C21ACB104E4EB7 -i 00:13:49.450 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3bcc90ec-7669-4265-b95e-2cf4cc6f2a23 00:13:49.450 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:49.450 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3BCC90EC76694265B95E2CF4CC6F2A23 -i 00:13:49.450 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:49.708 04:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:49.966 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:49.966 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:50.224 nvme0n1 00:13:50.224 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:50.224 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:50.482 nvme1n2 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:50.741 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:50.999 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d658eb68-5d7d-40c9-83c2-1acb104e4eb7 == \d\6\5\8\e\b\6\8\-\5\d\7\d\-\4\0\c\9\-\8\3\c\2\-\1\a\c\b\1\0\4\e\4\e\b\7 ]] 00:13:50.999 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:50.999 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:50.999 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:51.258 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
3bcc90ec-7669-4265-b95e-2cf4cc6f2a23 == \3\b\c\c\9\0\e\c\-\7\6\6\9\-\4\2\6\5\-\b\9\5\e\-\2\c\f\4\c\c\6\f\2\a\2\3 ]]
00:13:51.258 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d658eb68-5d7d-40c9-83c2-1acb104e4eb7
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D658EB685D7D40C983C21ACB104E4EB7
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D658EB685D7D40C983C21ACB104E4EB7
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:51.516 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D658EB685D7D40C983C21ACB104E4EB7
00:13:51.774 [2024-12-10 04:00:50.969492] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:13:51.774 [2024-12-10 04:00:50.969527] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:13:51.774 [2024-12-10 04:00:50.969536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:51.774 request:
00:13:51.774 {
00:13:51.774 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:51.774 "namespace": {
00:13:51.774 "bdev_name": "invalid",
00:13:51.774 "nsid": 1,
00:13:51.774 "nguid": "D658EB685D7D40C983C21ACB104E4EB7",
00:13:51.774 "no_auto_visible": false,
00:13:51.774 "hide_metadata": false
00:13:51.774 },
00:13:51.774 "method": "nvmf_subsystem_add_ns",
00:13:51.774 "req_id": 1
00:13:51.774 }
00:13:51.774 Got JSON-RPC error response
00:13:51.774 response:
00:13:51.774 {
00:13:51.774 "code": -32602,
00:13:51.774 "message": "Invalid parameters"
00:13:51.774 }
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d658eb68-5d7d-40c9-83c2-1acb104e4eb7
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:13:51.774 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D658EB685D7D40C983C21ACB104E4EB7 -i
00:13:52.033 04:00:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:13:53.936 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:13:53.936 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:13:53.936 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 14489
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 14489 ']'
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 14489
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 14489
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 14489'
00:13:54.195 killing process with pid 14489
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 14489
00:13:54.195 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 14489
00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.763 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.763 rmmod nvme_tcp 00:13:54.763 rmmod nvme_fabrics 00:13:54.763 rmmod nvme_keyring 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 12540 ']' 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 12540 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 12540 ']' 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 12540 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.763 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 12540 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 12540' 00:13:55.022 killing process with pid 12540 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 12540 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 12540 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.022 04:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.022 04:00:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.565 00:13:57.565 real 0m26.393s 00:13:57.565 user 0m32.250s 00:13:57.565 sys 0m7.077s 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.565 ************************************ 00:13:57.565 END TEST nvmf_ns_masking 00:13:57.565 ************************************ 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.565 ************************************ 00:13:57.565 START TEST nvmf_nvme_cli 00:13:57.565 ************************************ 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:57.565 * Looking for test storage... 
00:13:57.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
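The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits each version string on the characters ".-:" into an array and compares the fields numerically, left to right. A minimal standalone sketch of the same comparison, assuming plain dotted version strings (ver_lt is an illustrative name, not a helper from the SPDK tree):

  ver_lt() {                              # succeed when version $1 sorts before $2
      local IFS='.-:' i
      local -a a b
      read -ra a <<<"$1"; read -ra b <<<"$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0   # first differing field wins
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1                            # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'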
00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.565 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.566 04:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.566 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:03.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:03.035 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.035 
04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:03.035 Found net devices under 0000:af:00.0: cvl_0_0 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:03.035 Found net devices under 0000:af:00.1: cvl_0_1 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.035 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.036 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:14:03.295 00:14:03.295 --- 10.0.0.2 ping statistics --- 00:14:03.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.295 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:14:03.295 00:14:03.295 --- 10.0.0.1 ping statistics --- 00:14:03.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.295 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=19125 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 19125 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 19125 ']' 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.295 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.296 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.296 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.296 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.555 [2024-12-10 04:01:02.580764] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:03.555 [2024-12-10 04:01:02.580813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.555 [2024-12-10 04:01:02.660954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.555 [2024-12-10 04:01:02.703061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.555 [2024-12-10 04:01:02.703097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.555 [2024-12-10 04:01:02.703104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.555 [2024-12-10 04:01:02.703110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.555 [2024-12-10 04:01:02.703115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.555 [2024-12-10 04:01:02.704508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.555 [2024-12-10 04:01:02.704620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.555 [2024-12-10 04:01:02.704726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.555 [2024-12-10 04:01:02.704728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.555 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.555 [2024-12-10 04:01:02.837514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.814 Malloc0 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
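Condensed from the trace around this point, the target-side bring-up follows the stock SPDK recipe: launch nvmf_tgt inside the test network namespace, create the TCP transport, create two malloc bdevs, then expose them through a subsystem with a TCP listener. Stripped of the rpc_cmd/xtrace scaffolding, and run from the spdk checkout with the flag values the log shows, that is roughly:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512 B blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420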
00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.814 Malloc1 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.814 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.815 [2024-12-10 04:01:02.931367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.815 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:03.815 00:14:03.815 Discovery Log Number of Records 2, Generation counter 2 00:14:03.815 =====Discovery Log Entry 0====== 00:14:03.815 trtype: tcp 00:14:03.815 adrfam: ipv4 00:14:03.815 subtype: current discovery subsystem 00:14:03.815 treq: not required 00:14:03.815 portid: 0 00:14:03.815 trsvcid: 4420 00:14:03.815 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:03.815 traddr: 10.0.0.2 00:14:03.815 eflags: explicit discovery connections, duplicate discovery information 00:14:03.815 sectype: none 00:14:03.815 =====Discovery Log Entry 1====== 00:14:03.815 trtype: tcp 00:14:03.815 adrfam: ipv4 00:14:03.815 subtype: nvme subsystem 00:14:03.815 treq: not required 00:14:03.815 portid: 0 00:14:03.815 trsvcid: 4420 00:14:03.815 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:03.815 traddr: 10.0.0.2 00:14:03.815 eflags: none 00:14:03.815 sectype: none 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:03.815 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:05.192 04:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:07.095 04:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.095 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:07.354 /dev/nvme0n2 ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:07.354 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.613 04:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.613 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:07.613 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:07.613 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.614 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.873 rmmod nvme_tcp 00:14:07.873 rmmod nvme_fabrics 00:14:07.873 rmmod nvme_keyring 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 19125 ']' 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 19125 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 19125 ']' 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 19125 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.873 04:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 19125 
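On the host side the test is the plain nvme-cli round trip: read the discovery log (two records, as printed above), connect to the I/O subsystem, wait until lsblk shows both namespaces carrying the subsystem serial, then disconnect by NQN. Minus the retry cap inside waitforserial, and eliding the --hostnqn/--hostid arguments the harness also passes, that amounts to roughly:

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do
      sleep 2                             # both malloc-backed namespaces must appear
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1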
00:14:07.873 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.873 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.873 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 19125' 00:14:07.873 killing process with pid 19125 00:14:07.873 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 19125 00:14:07.873 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 19125 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.131 04:01:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.040 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.040 00:14:10.040 real 0m12.896s 00:14:10.040 user 0m19.621s 00:14:10.040 sys 0m5.156s 00:14:10.040 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.040 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.040 ************************************ 00:14:10.040 END TEST nvmf_nvme_cli 00:14:10.040 ************************************ 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.301 ************************************ 00:14:10.301 START TEST nvmf_vfio_user 00:14:10.301 ************************************ 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
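nvmftestfini then unwinds the fixture: unload the host NVMe modules, kill the target by pid, restore every iptables rule except the ones the harness tagged with an SPDK_NVMF comment, and remove the target's network namespace. The namespace-removal helper runs with xtrace disabled, so it leaves no trace above; a rough sketch of the sequence (the ip netns delete line is an assumption about what remove_spdk_ns does here, not a quoted command):

  modprobe -r nvme-tcp nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk       # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1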
00:14:10.301 * Looking for test storage... 00:14:10.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.301 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.302 --rc genhtml_branch_coverage=1 00:14:10.302 --rc genhtml_function_coverage=1 00:14:10.302 --rc genhtml_legend=1 00:14:10.302 --rc geninfo_all_blocks=1 00:14:10.302 --rc geninfo_unexecuted_blocks=1 00:14:10.302 00:14:10.302 ' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.302 --rc genhtml_branch_coverage=1 00:14:10.302 --rc genhtml_function_coverage=1 00:14:10.302 --rc genhtml_legend=1 00:14:10.302 --rc geninfo_all_blocks=1 00:14:10.302 --rc geninfo_unexecuted_blocks=1 00:14:10.302 00:14:10.302 ' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.302 --rc genhtml_branch_coverage=1 00:14:10.302 --rc genhtml_function_coverage=1 00:14:10.302 --rc genhtml_legend=1 00:14:10.302 --rc geninfo_all_blocks=1 00:14:10.302 --rc geninfo_unexecuted_blocks=1 00:14:10.302 00:14:10.302 ' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.302 --rc genhtml_branch_coverage=1 00:14:10.302 --rc genhtml_function_coverage=1 00:14:10.302 --rc genhtml_legend=1 00:14:10.302 --rc geninfo_all_blocks=1 00:14:10.302 --rc geninfo_unexecuted_blocks=1 00:14:10.302 00:14:10.302 ' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
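The `[: : integer expression expected` message a few lines above is benign, not a test failure: nvmf/common.sh line 33 runs a numeric test, `'[' '' -eq 1 ']'`, against a variable that is empty in this configuration, and `[` cannot apply `-eq` to an empty string, so it prints the warning and the test simply evaluates false while the script continues. A minimal reproduction and a defensive rewrite, using a hypothetical SOME_FLAG stand-in since the trace does not show which variable was empty:

  #!/usr/bin/env bash
  SOME_FLAG=""                       # hypothetical stand-in for the empty variable
  if [ "$SOME_FLAG" -eq 1 ]; then    # prints "[: : integer expression expected"
    echo "flag set"
  fi                                 # the test returns false; execution continues
  # Defensive form: default the value so the comparison is always numeric.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
  fi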
00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=20388 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 20388' 00:14:10.302 Process pid: 20388 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 20388 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 20388 ']' 00:14:10.302 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.303 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.303 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.303 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.303 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.562 [2024-12-10 04:01:09.623259] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:10.562 [2024-12-10 04:01:09.623309] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.562 [2024-12-10 04:01:09.698525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.562 [2024-12-10 04:01:09.737243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.562 [2024-12-10 04:01:09.737281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.562 [2024-12-10 04:01:09.737289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.562 [2024-12-10 04:01:09.737295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.562 [2024-12-10 04:01:09.737301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.562 [2024-12-10 04:01:09.738716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.562 [2024-12-10 04:01:09.738827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.562 [2024-12-10 04:01:09.738932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.562 [2024-12-10 04:01:09.738933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.562 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.562 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:10.562 04:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:11.939 04:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:11.939 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:11.939 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:11.939 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.939 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:11.939 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:12.198 Malloc1 00:14:12.198 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:12.198 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.456 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:12.715 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.715 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:12.715 04:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.974 Malloc2 00:14:12.974 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
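For orientation, the loop traced above builds one emulated vfio-user NVMe device per iteration; device 1 (cnode1) is complete at this point, and the remaining two RPCs for cnode2 follow just below. Condensed, the whole setup is one transport RPC plus four RPCs per device against the running nvmf_tgt, sketched here with the workspace paths shown in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER               # once per target process
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i  # socket dir for the device
    $rpc bdev_malloc_create 64 512 -b Malloc$i         # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

Each listener path becomes the vfio-user socket a client passes as traddr, as the spdk_nvme_identify invocation further down does for device 1.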
00:14:13.233 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:13.233 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:13.494 04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:13.494 [2024-12-10 04:01:12.682547] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:13.494 [2024-12-10 04:01:12.682590] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20861 ] 00:14:13.494 [2024-12-10 04:01:12.724451] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:13.494 [2024-12-10 04:01:12.726732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.494 [2024-12-10 04:01:12.726754] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f86c9f01000 00:14:13.494 [2024-12-10 04:01:12.727733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.728734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.729743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.730748] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.731753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.732764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.733770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, 
Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.734773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.494 [2024-12-10 04:01:12.735784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.495 [2024-12-10 04:01:12.735792] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f86c9ef6000 00:14:13.495 [2024-12-10 04:01:12.736706] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.495 [2024-12-10 04:01:12.749423] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:13.495 [2024-12-10 04:01:12.749447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:13.495 [2024-12-10 04:01:12.754888] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.495 [2024-12-10 04:01:12.754931] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:13.495 [2024-12-10 04:01:12.755006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:13.495 [2024-12-10 04:01:12.755023] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:13.495 [2024-12-10 04:01:12.755028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:13.495 [2024-12-10 04:01:12.755885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:13.495 [2024-12-10 04:01:12.755894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:13.495 [2024-12-10 04:01:12.755901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:13.495 [2024-12-10 04:01:12.756890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.495 [2024-12-10 04:01:12.756899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:13.495 [2024-12-10 04:01:12.756908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.757897] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:13.495 [2024-12-10 04:01:12.757905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.758902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
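The DEBUG lines in this stretch are the NVMe controller enable handshake carried over the vfio-user socket: the host reads CAP (register offset 0x0) and VS (0x8), checks CC (0x14), waits for CSTS.RDY at 0x1c to drop to 0, then sets CC.EN = 1 and polls CSTS.RDY until the controller reports ready (the polling continues directly below). This verbosity appears only because the identify invocation above passed -L nvme -L nvme_vfio -L vfio_pci; a sketch of the same attach without the debug log flags, which prints just the controller report:

  id=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # Same attach as above, minus the -L flags: only the controller report prints.
  $id -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'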
00:14:13.495 [2024-12-10 04:01:12.758909] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:13.495 [2024-12-10 04:01:12.758914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.758920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.759027] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:13.495 [2024-12-10 04:01:12.759031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.759036] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:13.495 [2024-12-10 04:01:12.759906] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:13.495 [2024-12-10 04:01:12.760907] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:13.495 [2024-12-10 04:01:12.761916] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.495 [2024-12-10 04:01:12.762914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:13.495 [2024-12-10 04:01:12.762992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.495 [2024-12-10 04:01:12.763924] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:13.495 [2024-12-10 04:01:12.763932] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.495 [2024-12-10 04:01:12.763936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.763953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:13.495 [2024-12-10 04:01:12.763960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.763976] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.495 [2024-12-10 04:01:12.763981] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.495 [2024-12-10 04:01:12.763985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.495 [2024-12-10 04:01:12.763999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764048] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:13.495 [2024-12-10 04:01:12.764052] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:13.495 [2024-12-10 04:01:12.764056] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:13.495 [2024-12-10 04:01:12.764060] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:13.495 [2024-12-10 04:01:12.764064] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:13.495 [2024-12-10 04:01:12.764068] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:13.495 [2024-12-10 04:01:12.764072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.495 [2024-12-10 04:01:12.764120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.495 [2024-12-10 04:01:12.764127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.495 [2024-12-10 04:01:12.764135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.495 [2024-12-10 04:01:12.764139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764174] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:13.495 
[2024-12-10 04:01:12.764179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764279] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:13.495 [2024-12-10 04:01:12.764283] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:13.495 [2024-12-10 04:01:12.764286] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.495 [2024-12-10 04:01:12.764291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764311] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:13.495 [2024-12-10 04:01:12.764321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764334] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.495 [2024-12-10 04:01:12.764338] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.495 [2024-12-10 04:01:12.764340] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.495 [2024-12-10 04:01:12.764346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.495 [2024-12-10 04:01:12.764371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:13.495 [2024-12-10 04:01:12.764380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:13.495 [2024-12-10 04:01:12.764393] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.496 [2024-12-10 04:01:12.764397] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.496 [2024-12-10 04:01:12.764400] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.496 [2024-12-10 04:01:12.764405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764455] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:13.496 [2024-12-10 04:01:12.764460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:13.496 [2024-12-10 04:01:12.764464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:13.496 [2024-12-10 04:01:12.764480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764564] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:13.496 [2024-12-10 04:01:12.764568] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:13.496 [2024-12-10 04:01:12.764571] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:13.496 [2024-12-10 04:01:12.764574] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:13.496 [2024-12-10 04:01:12.764577] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:13.496 [2024-12-10 04:01:12.764583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:13.496 [2024-12-10 04:01:12.764589] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:13.496 [2024-12-10 04:01:12.764593] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:13.496 [2024-12-10 04:01:12.764596] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.496 [2024-12-10 04:01:12.764601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764607] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:13.496 [2024-12-10 04:01:12.764611] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.496 [2024-12-10 04:01:12.764614] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.496 [2024-12-10 04:01:12.764619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764626] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:13.496 [2024-12-10 04:01:12.764631] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:13.496 [2024-12-10 04:01:12.764634] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.496 [2024-12-10 04:01:12.764639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:13.496 [2024-12-10 04:01:12.764645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:13.496 [2024-12-10 04:01:12.764671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:13.496 ===================================================== 00:14:13.496 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:13.496 ===================================================== 00:14:13.496 Controller Capabilities/Features 00:14:13.496 ================================ 00:14:13.496 Vendor ID: 4e58 00:14:13.496 Subsystem Vendor ID: 4e58 00:14:13.496 Serial Number: SPDK1 00:14:13.496 Model Number: SPDK bdev Controller 00:14:13.496 Firmware Version: 25.01 00:14:13.496 Recommended Arb Burst: 6 00:14:13.496 IEEE OUI Identifier: 8d 6b 50 00:14:13.496 Multi-path I/O 00:14:13.496 May have multiple subsystem ports: Yes 00:14:13.496 May have multiple controllers: Yes 00:14:13.496 Associated with SR-IOV VF: No 00:14:13.496 Max Data Transfer Size: 131072 00:14:13.496 Max Number of Namespaces: 32 00:14:13.496 Max Number of I/O Queues: 127 00:14:13.496 NVMe Specification Version (VS): 1.3 00:14:13.496 NVMe Specification Version (Identify): 1.3 00:14:13.496 Maximum Queue Entries: 256 00:14:13.496 Contiguous Queues Required: Yes 00:14:13.496 Arbitration Mechanisms Supported 00:14:13.496 Weighted Round Robin: Not Supported 00:14:13.496 Vendor Specific: Not Supported 00:14:13.496 Reset Timeout: 15000 ms 00:14:13.496 Doorbell Stride: 4 bytes 00:14:13.496 NVM Subsystem Reset: Not Supported 00:14:13.496 Command Sets Supported 00:14:13.496 NVM Command Set: Supported 00:14:13.496 Boot Partition: Not Supported 00:14:13.496 Memory Page Size Minimum: 4096 bytes 00:14:13.496 Memory Page Size Maximum: 4096 bytes 00:14:13.496 Persistent Memory Region: Not Supported 00:14:13.496 Optional Asynchronous Events Supported 00:14:13.496 Namespace Attribute Notices: Supported 00:14:13.496 Firmware Activation Notices: Not Supported 00:14:13.496 ANA Change Notices: Not Supported 00:14:13.496 PLE Aggregate Log Change Notices: Not Supported 00:14:13.496 LBA Status Info Alert Notices: Not Supported 00:14:13.496 EGE Aggregate Log Change Notices: Not Supported 00:14:13.496 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.496 Zone Descriptor Change Notices: Not Supported 00:14:13.496 Discovery Log Change Notices: Not Supported 00:14:13.496 Controller Attributes 00:14:13.496 128-bit Host Identifier: Supported 00:14:13.496 Non-Operational Permissive Mode: Not Supported 00:14:13.496 NVM Sets: Not Supported 00:14:13.496 Read Recovery Levels: Not Supported 00:14:13.496 Endurance Groups: Not Supported 00:14:13.496 Predictable Latency Mode: Not Supported 00:14:13.496 Traffic Based Keep ALive: Not Supported 00:14:13.496 Namespace Granularity: Not Supported 00:14:13.496 SQ Associations: Not Supported 00:14:13.496 UUID List: Not Supported 00:14:13.496 Multi-Domain Subsystem: Not Supported 00:14:13.496 Fixed Capacity Management: Not Supported 00:14:13.496 Variable Capacity Management: Not Supported 00:14:13.496 Delete Endurance Group: Not Supported 00:14:13.496 Delete NVM Set: Not Supported 00:14:13.496 Extended LBA Formats Supported: Not Supported 00:14:13.496 Flexible Data Placement Supported: Not Supported 00:14:13.496 00:14:13.496 Controller Memory Buffer Support 00:14:13.496 ================================ 00:14:13.496 
Supported: No 00:14:13.496 00:14:13.496 Persistent Memory Region Support 00:14:13.496 ================================ 00:14:13.496 Supported: No 00:14:13.496 00:14:13.496 Admin Command Set Attributes 00:14:13.496 ============================ 00:14:13.496 Security Send/Receive: Not Supported 00:14:13.496 Format NVM: Not Supported 00:14:13.496 Firmware Activate/Download: Not Supported 00:14:13.496 Namespace Management: Not Supported 00:14:13.496 Device Self-Test: Not Supported 00:14:13.496 Directives: Not Supported 00:14:13.496 NVMe-MI: Not Supported 00:14:13.496 Virtualization Management: Not Supported 00:14:13.496 Doorbell Buffer Config: Not Supported 00:14:13.496 Get LBA Status Capability: Not Supported 00:14:13.496 Command & Feature Lockdown Capability: Not Supported 00:14:13.496 Abort Command Limit: 4 00:14:13.496 Async Event Request Limit: 4 00:14:13.496 Number of Firmware Slots: N/A 00:14:13.496 Firmware Slot 1 Read-Only: N/A 00:14:13.496 Firmware Activation Without Reset: N/A 00:14:13.496 Multiple Update Detection Support: N/A 00:14:13.496 Firmware Update Granularity: No Information Provided 00:14:13.496 Per-Namespace SMART Log: No 00:14:13.496 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.496 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:13.496 Command Effects Log Page: Supported 00:14:13.496 Get Log Page Extended Data: Supported 00:14:13.496 Telemetry Log Pages: Not Supported 00:14:13.497 Persistent Event Log Pages: Not Supported 00:14:13.497 Supported Log Pages Log Page: May Support 00:14:13.497 Commands Supported & Effects Log Page: Not Supported 00:14:13.497 Feature Identifiers & Effects Log Page:May Support 00:14:13.497 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.497 Data Area 4 for Telemetry Log: Not Supported 00:14:13.497 Error Log Page Entries Supported: 128 00:14:13.497 Keep Alive: Supported 00:14:13.497 Keep Alive Granularity: 10000 ms 00:14:13.497 00:14:13.497 NVM Command Set Attributes 00:14:13.497 ========================== 00:14:13.497 Submission Queue Entry Size 00:14:13.497 Max: 64 00:14:13.497 Min: 64 00:14:13.497 Completion Queue Entry Size 00:14:13.497 Max: 16 00:14:13.497 Min: 16 00:14:13.497 Number of Namespaces: 32 00:14:13.497 Compare Command: Supported 00:14:13.497 Write Uncorrectable Command: Not Supported 00:14:13.497 Dataset Management Command: Supported 00:14:13.497 Write Zeroes Command: Supported 00:14:13.497 Set Features Save Field: Not Supported 00:14:13.497 Reservations: Not Supported 00:14:13.497 Timestamp: Not Supported 00:14:13.497 Copy: Supported 00:14:13.497 Volatile Write Cache: Present 00:14:13.497 Atomic Write Unit (Normal): 1 00:14:13.497 Atomic Write Unit (PFail): 1 00:14:13.497 Atomic Compare & Write Unit: 1 00:14:13.497 Fused Compare & Write: Supported 00:14:13.497 Scatter-Gather List 00:14:13.497 SGL Command Set: Supported (Dword aligned) 00:14:13.497 SGL Keyed: Not Supported 00:14:13.497 SGL Bit Bucket Descriptor: Not Supported 00:14:13.497 SGL Metadata Pointer: Not Supported 00:14:13.497 Oversized SGL: Not Supported 00:14:13.497 SGL Metadata Address: Not Supported 00:14:13.497 SGL Offset: Not Supported 00:14:13.497 Transport SGL Data Block: Not Supported 00:14:13.497 Replay Protected Memory Block: Not Supported 00:14:13.497 00:14:13.497 Firmware Slot Information 00:14:13.497 ========================= 00:14:13.497 Active slot: 1 00:14:13.497 Slot 1 Firmware Revision: 25.01 00:14:13.497 00:14:13.497 00:14:13.497 Commands Supported and Effects 00:14:13.497 ============================== 00:14:13.497 Admin 
Commands 00:14:13.497 -------------- 00:14:13.497 Get Log Page (02h): Supported 00:14:13.497 Identify (06h): Supported 00:14:13.497 Abort (08h): Supported 00:14:13.497 Set Features (09h): Supported 00:14:13.497 Get Features (0Ah): Supported 00:14:13.497 Asynchronous Event Request (0Ch): Supported 00:14:13.497 Keep Alive (18h): Supported 00:14:13.497 I/O Commands 00:14:13.497 ------------ 00:14:13.497 Flush (00h): Supported LBA-Change 00:14:13.497 Write (01h): Supported LBA-Change 00:14:13.497 Read (02h): Supported 00:14:13.497 Compare (05h): Supported 00:14:13.497 Write Zeroes (08h): Supported LBA-Change 00:14:13.497 Dataset Management (09h): Supported LBA-Change 00:14:13.497 Copy (19h): Supported LBA-Change 00:14:13.497 00:14:13.497 Error Log 00:14:13.497 ========= 00:14:13.497 00:14:13.497 Arbitration 00:14:13.497 =========== 00:14:13.497 Arbitration Burst: 1 00:14:13.497 00:14:13.497 Power Management 00:14:13.497 ================ 00:14:13.497 Number of Power States: 1 00:14:13.497 Current Power State: Power State #0 00:14:13.497 Power State #0: 00:14:13.497 Max Power: 0.00 W 00:14:13.497 Non-Operational State: Operational 00:14:13.497 Entry Latency: Not Reported 00:14:13.497 Exit Latency: Not Reported 00:14:13.497 Relative Read Throughput: 0 00:14:13.497 Relative Read Latency: 0 00:14:13.497 Relative Write Throughput: 0 00:14:13.497 Relative Write Latency: 0 00:14:13.497 Idle Power: Not Reported 00:14:13.497 Active Power: Not Reported 00:14:13.497 Non-Operational Permissive Mode: Not Supported 00:14:13.497 00:14:13.497 Health Information 00:14:13.497 ================== 00:14:13.497 Critical Warnings: 00:14:13.497 Available Spare Space: OK 00:14:13.497 Temperature: OK 00:14:13.497 Device Reliability: OK 00:14:13.497 Read Only: No 00:14:13.497 Volatile Memory Backup: OK 00:14:13.497 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:13.497 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:13.497 Available Spare: 0% 00:14:13.497 Available Spare Threshold: 0% 00:14:13.756 Life Percentage Used: 0% 00:14:13.756 Data Units Read: 0 00:14:13.756 Data Units Written: 0 00:14:13.756 Host Read Commands: 0 00:14:13.756 Host Write Commands: 0 00:14:13.756 Controller Busy Time: 0 minutes 00:14:13.756 Power Cycles: 0 00:14:13.756 Power On Hours: 0 hours 00:14:13.756 Unsafe Shutdowns: 0 00:14:13.756 Unrecoverable Media Errors: 0 00:14:13.756 Lifetime Error Log Entries: 0 00:14:13.756 Warning Temperature Time: 0 minutes 00:14:13.756 Critical Temperature Time: 0 minutes 00:14:13.756 00:14:13.756 Number of Queues 00:14:13.756 ================ 00:14:13.756 Number of I/O Submission Queues: 127 00:14:13.756 Number of I/O Completion Queues: 127 00:14:13.756 00:14:13.756 Active Namespaces 00:14:13.756 ================= 00:14:13.756 Namespace ID:1 00:14:13.756 Error Recovery Timeout: Unlimited 00:14:13.756 Command Set Identifier: NVM (00h) 00:14:13.756 Deallocate: Supported 00:14:13.756 Deallocated/Unwritten Error: Not Supported 00:14:13.756 Deallocated Read Value: Unknown 00:14:13.756 Deallocate in Write Zeroes: Not Supported 00:14:13.756 Deallocated Guard Field: 0xFFFF 00:14:13.756 Flush: Supported 00:14:13.756 Reservation: Supported 00:14:13.756 Namespace Sharing Capabilities: Multiple Controllers 00:14:13.756 Size (in LBAs): 131072 (0GiB) 00:14:13.757 Capacity (in LBAs): 131072 (0GiB) 00:14:13.757 Utilization (in LBAs): 131072 (0GiB) 00:14:13.757 NGUID: 04A2342C8ABE42E68F02035B50DB16EB 00:14:13.757 UUID: 04a2342c-8abe-42e6-8f02-035b50db16eb 00:14:13.757 Thin Provisioning: Not Supported 00:14:13.757 Per-NS Atomic Units: Yes 00:14:13.757 Atomic Boundary Size (Normal): 0 00:14:13.757 Atomic Boundary Size (PFail): 0 00:14:13.757 Atomic Boundary Offset: 0 00:14:13.757 Maximum Single Source Range Length: 65535 00:14:13.757 Maximum Copy Length: 65535 00:14:13.757 Maximum Source Range Count: 1 00:14:13.757 NGUID/EUI64 Never Reused: No 00:14:13.757 Namespace Write Protected: No 00:14:13.757 Number of LBA Formats: 1 00:14:13.757 Current LBA Format: LBA Format #00 00:14:13.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:13.757 00:14:13.757
00:14:13.497 [2024-12-10 04:01:12.764754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:13.497 [2024-12-10 04:01:12.764764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:13.497 [2024-12-10 04:01:12.764789] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:13.497 [2024-12-10 04:01:12.764797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.497 [2024-12-10 04:01:12.764803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.497 [2024-12-10 04:01:12.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.497 [2024-12-10 04:01:12.764813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.497 [2024-12-10 04:01:12.764932] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.497 [2024-12-10 04:01:12.764942] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:13.497 [2024-12-10 04:01:12.765933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:13.497 [2024-12-10 04:01:12.765981] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:13.497 [2024-12-10 04:01:12.765987] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:13.497 [2024-12-10 04:01:12.766934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:13.497 [2024-12-10 04:01:12.766944] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:13.497 [2024-12-10 04:01:12.766990] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:13.497 [2024-12-10 04:01:12.767958] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
04:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
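The spdk_nvme_perf run above attaches to the vfio-user controller from a separate process: -q 128 keeps 128 commands outstanding, -o 4096 issues 4 KiB I/Os, -w read selects the workload, -t 5 runs for five seconds, and -c 0x2 pins the worker to core 1 (matching the 'with lcore 1' association in the output below); -s 256 and -g shape the tool's DPDK hugepage memory setup. A sketch of sweeping the same target across the two workloads this log exercises:

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  tr='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  for w in read write; do   # the read and write runs below use exactly these flags
    $perf -r "$tr" -s 256 -g -q 128 -o 4096 -w $w -t 5 -c 0x2
  done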
00:14:13.757 [2024-12-10 04:01:12.994983] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:19.028 Initializing NVMe Controllers 00:14:19.028 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:19.028 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:19.028 Initialization complete. Launching workers. 00:14:19.028 ======================================================== 00:14:19.028 Latency(us) 00:14:19.028 Device Information : IOPS MiB/s Average min max 00:14:19.028 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39956.60 156.08 3203.62 979.05 9382.92 00:14:19.028 ======================================================== 00:14:19.028 Total : 39956.60 156.08 3203.62 979.05 9382.92 00:14:19.028 00:14:19.028 [2024-12-10 04:01:18.019685] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:19.028 04:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:19.028 [2024-12-10 04:01:18.253745] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.298 Initializing NVMe Controllers 00:14:24.298 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.298 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:24.298 Initialization complete. Launching workers. 
00:14:24.298 ======================================================== 00:14:24.298 Latency(us) 00:14:24.298 Device Information : IOPS MiB/s Average min max 00:14:24.298 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.09 62.71 7978.40 5985.06 9975.57 00:14:24.298 ======================================================== 00:14:24.298 Total : 16054.09 62.71 7978.40 5985.06 9975.57 00:14:24.298 00:14:24.298 [2024-12-10 04:01:23.295265] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.298 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:24.298 [2024-12-10 04:01:23.502271] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.571 [2024-12-10 04:01:28.575466] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.571 Initializing NVMe Controllers 00:14:29.571 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.571 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.571 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:29.571 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:29.571 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:29.571 Initialization complete. Launching workers. 00:14:29.571 Starting thread on core 2 00:14:29.571 Starting thread on core 3 00:14:29.571 Starting thread on core 1 00:14:29.571 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:29.830 [2024-12-10 04:01:28.869598] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.120 [2024-12-10 04:01:31.941384] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.120 Initializing NVMe Controllers 00:14:33.120 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.120 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.120 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:33.120 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:33.120 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:33.120 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:33.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:33.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:33.120 Initialization complete. Launching workers. 
00:14:33.120 Starting thread on core 1 with urgent priority queue 00:14:33.120 Starting thread on core 2 with urgent priority queue 00:14:33.120 Starting thread on core 3 with urgent priority queue 00:14:33.120 Starting thread on core 0 with urgent priority queue 00:14:33.120 SPDK bdev Controller (SPDK1 ) core 0: 7557.67 IO/s 13.23 secs/100000 ios 00:14:33.120 SPDK bdev Controller (SPDK1 ) core 1: 7743.00 IO/s 12.91 secs/100000 ios 00:14:33.120 SPDK bdev Controller (SPDK1 ) core 2: 8280.33 IO/s 12.08 secs/100000 ios 00:14:33.120 SPDK bdev Controller (SPDK1 ) core 3: 8858.33 IO/s 11.29 secs/100000 ios 00:14:33.120 ======================================================== 00:14:33.120 00:14:33.120 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:33.120 [2024-12-10 04:01:32.220121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.120 Initializing NVMe Controllers 00:14:33.120 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.120 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.120 Namespace ID: 1 size: 0GB 00:14:33.120 Initialization complete. 00:14:33.120 INFO: using host memory buffer for IO 00:14:33.120 Hello world! 00:14:33.120 [2024-12-10 04:01:32.253321] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.120 04:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:33.379 [2024-12-10 04:01:32.535068] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.315 Initializing NVMe Controllers 00:14:34.315 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.315 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.315 Initialization complete. Launching workers. 
00:14:34.315 submit (in ns) avg, min, max = 6364.8, 3200.0, 3999085.7 00:14:34.315 complete (in ns) avg, min, max = 22381.4, 1757.1, 5994774.3 00:14:34.315 00:14:34.315 Submit histogram 00:14:34.315 ================ 00:14:34.316 Range in us Cumulative Count 00:14:34.316 3.200 - 3.215: 0.4514% ( 73) 00:14:34.316 3.215 - 3.230: 2.2324% ( 288) 00:14:34.316 3.230 - 3.246: 6.4251% ( 678) 00:14:34.316 3.246 - 3.261: 12.4791% ( 979) 00:14:34.316 3.261 - 3.276: 18.9413% ( 1045) 00:14:34.316 3.276 - 3.291: 26.2321% ( 1179) 00:14:34.316 3.291 - 3.307: 32.9664% ( 1089) 00:14:34.316 3.307 - 3.322: 38.2351% ( 852) 00:14:34.316 3.322 - 3.337: 43.1390% ( 793) 00:14:34.316 3.337 - 3.352: 48.0366% ( 792) 00:14:34.316 3.352 - 3.368: 51.9943% ( 640) 00:14:34.316 3.368 - 3.383: 56.3045% ( 697) 00:14:34.316 3.383 - 3.398: 63.4778% ( 1160) 00:14:34.316 3.398 - 3.413: 68.9815% ( 890) 00:14:34.316 3.413 - 3.429: 74.1698% ( 839) 00:14:34.316 3.429 - 3.444: 79.6302% ( 883) 00:14:34.316 3.444 - 3.459: 82.9757% ( 541) 00:14:34.316 3.459 - 3.474: 85.0597% ( 337) 00:14:34.316 3.474 - 3.490: 86.2779% ( 197) 00:14:34.316 3.490 - 3.505: 87.0200% ( 120) 00:14:34.316 3.505 - 3.520: 87.5765% ( 90) 00:14:34.316 3.520 - 3.535: 88.3928% ( 132) 00:14:34.316 3.535 - 3.550: 89.1411% ( 121) 00:14:34.316 3.550 - 3.566: 89.9635% ( 133) 00:14:34.316 3.566 - 3.581: 91.0271% ( 172) 00:14:34.316 3.581 - 3.596: 91.9857% ( 155) 00:14:34.316 3.596 - 3.611: 92.8514% ( 140) 00:14:34.316 3.611 - 3.627: 93.5935% ( 120) 00:14:34.316 3.627 - 3.642: 94.4840% ( 144) 00:14:34.316 3.642 - 3.657: 95.4672% ( 159) 00:14:34.316 3.657 - 3.672: 96.3453% ( 142) 00:14:34.316 3.672 - 3.688: 97.0812% ( 119) 00:14:34.316 3.688 - 3.703: 97.6192% ( 87) 00:14:34.316 3.703 - 3.718: 98.1386% ( 84) 00:14:34.316 3.718 - 3.733: 98.4973% ( 58) 00:14:34.316 3.733 - 3.749: 98.8189% ( 52) 00:14:34.316 3.749 - 3.764: 99.0353% ( 35) 00:14:34.316 3.764 - 3.779: 99.2517% ( 35) 00:14:34.316 3.779 - 3.794: 99.3445% ( 15) 00:14:34.316 3.794 - 3.810: 99.4002% ( 9) 00:14:34.316 3.810 - 3.825: 99.4806% ( 13) 00:14:34.316 3.825 - 3.840: 99.5053% ( 4) 00:14:34.316 3.840 - 3.855: 99.5548% ( 8) 00:14:34.316 3.855 - 3.870: 99.5733% ( 3) 00:14:34.316 3.870 - 3.886: 99.5795% ( 1) 00:14:34.316 3.886 - 3.901: 99.5857% ( 1) 00:14:34.316 5.303 - 5.333: 99.5919% ( 1) 00:14:34.316 5.394 - 5.425: 99.5980% ( 1) 00:14:34.316 5.425 - 5.455: 99.6042% ( 1) 00:14:34.316 5.455 - 5.486: 99.6104% ( 1) 00:14:34.316 5.516 - 5.547: 99.6166% ( 1) 00:14:34.316 5.730 - 5.760: 99.6228% ( 1) 00:14:34.316 5.943 - 5.973: 99.6290% ( 1) 00:14:34.316 5.973 - 6.004: 99.6351% ( 1) 00:14:34.316 6.004 - 6.034: 99.6413% ( 1) 00:14:34.316 6.034 - 6.065: 99.6475% ( 1) 00:14:34.316 6.065 - 6.095: 99.6537% ( 1) 00:14:34.316 6.126 - 6.156: 99.6599% ( 1) 00:14:34.316 6.187 - 6.217: 99.6723% ( 2) 00:14:34.316 6.217 - 6.248: 99.6784% ( 1) 00:14:34.316 6.248 - 6.278: 99.6846% ( 1) 00:14:34.316 6.339 - 6.370: 99.6908% ( 1) 00:14:34.316 6.430 - 6.461: 99.6970% ( 1) 00:14:34.316 6.491 - 6.522: 99.7155% ( 3) 00:14:34.316 6.583 - 6.613: 99.7217% ( 1) 00:14:34.316 6.613 - 6.644: 99.7279% ( 1) 00:14:34.316 6.735 - 6.766: 99.7403% ( 2) 00:14:34.316 6.827 - 6.857: 99.7465% ( 1) 00:14:34.316 6.918 - 6.949: 99.7526% ( 1) 00:14:34.316 6.949 - 6.979: 99.7588% ( 1) 00:14:34.316 6.979 - 7.010: 99.7774% ( 3) 00:14:34.316 7.010 - 7.040: 99.7836% ( 1) 00:14:34.316 7.070 - 7.101: 99.8021% ( 3) 00:14:34.316 7.101 - 7.131: 99.8145% ( 2) 00:14:34.316 7.162 - 7.192: 99.8207% ( 1) 00:14:34.316 7.192 - 7.223: 99.8269% ( 1) 00:14:34.316 7.284 - 7.314: 
99.8330% ( 1) 00:14:34.316 7.314 - 7.345: 99.8392% ( 1) 00:14:34.316 7.406 - 7.436: 99.8454% ( 1) 00:14:34.316 [2024-12-10 04:01:33.557061] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.316 7.802 - 7.863: 99.8516% ( 1) 00:14:34.316 7.863 - 7.924: 99.8578% ( 1) 00:14:34.316 7.985 - 8.046: 99.8640% ( 1) 00:14:34.316 8.107 - 8.168: 99.8701% ( 1) 00:14:34.316 8.168 - 8.229: 99.8763% ( 1) 00:14:34.316 8.411 - 8.472: 99.8825% ( 1) 00:14:34.316 8.533 - 8.594: 99.8887% ( 1) 00:14:34.316 8.655 - 8.716: 99.8949% ( 1) 00:14:34.316 8.716 - 8.777: 99.9011% ( 1) 00:14:34.316 8.777 - 8.838: 99.9072% ( 1) 00:14:34.316 12.678 - 12.739: 99.9134% ( 1) 00:14:34.316 18.895 - 19.017: 99.9196% ( 1) 00:14:34.316 154.088 - 155.063: 99.9258% ( 1) 00:14:34.316 3994.575 - 4025.783: 100.0000% ( 12) 00:14:34.316 00:14:34.316 Complete histogram 00:14:34.316 ================== 00:14:34.316 Range in us Cumulative Count 00:14:34.316 1.752 - 1.760: 0.0309% ( 5) 00:14:34.316 1.760 - 1.768: 1.4409% ( 228) 00:14:34.316 1.768 - 1.775: 13.6541% ( 1975) 00:14:34.316 1.775 - 1.783: 40.5788% ( 4354) 00:14:34.316 1.783 - 1.790: 63.0573% ( 3635) 00:14:34.316 1.790 - 1.798: 75.8889% ( 2075) 00:14:34.316 1.798 - 1.806: 84.0455% ( 1319) 00:14:34.316 1.806 - 1.813: 87.9970% ( 639) 00:14:34.316 1.813 - 1.821: 89.6605% ( 269) 00:14:34.316 1.821 - 1.829: 90.6685% ( 163) 00:14:34.316 1.829 - 1.836: 91.8682% ( 194) 00:14:34.316 1.836 - 1.844: 93.4945% ( 263) 00:14:34.316 1.844 - 1.851: 95.1518% ( 268) 00:14:34.316 1.851 - 1.859: 96.4381% ( 208) 00:14:34.316 1.859 - 1.867: 97.5141% ( 174) 00:14:34.316 1.867 - 1.874: 98.1757% ( 107) 00:14:34.316 1.874 - 1.882: 98.4540% ( 45) 00:14:34.316 1.882 - 1.890: 98.6519% ( 32) 00:14:34.316 1.890 - 1.897: 98.7818% ( 21) 00:14:34.316 1.897 - 1.905: 98.8869% ( 17) 00:14:34.316 1.905 - 1.912: 98.9858% ( 16) 00:14:34.316 1.912 - 1.920: 99.0848% ( 16) 00:14:34.316 1.920 - 1.928: 99.1590% ( 12) 00:14:34.316 1.928 - 1.935: 99.2146% ( 9) 00:14:34.316 1.935 - 1.943: 99.2765% ( 10) 00:14:34.316 1.943 - 1.950: 99.2950% ( 3) 00:14:34.316 1.950 - 1.966: 99.3074% ( 2) 00:14:34.316 1.981 - 1.996: 99.3198% ( 2) 00:14:34.316 2.027 - 2.042: 99.3260% ( 1) 00:14:34.316 2.042 - 2.057: 99.3321% ( 1) 00:14:34.316 2.270 - 2.286: 99.3383% ( 1) 00:14:34.316 2.545 - 2.560: 99.3445% ( 1) 00:14:34.316 3.886 - 3.901: 99.3507% ( 1) 00:14:34.316 4.571 - 4.602: 99.3631% ( 2) 00:14:34.316 4.663 - 4.693: 99.3754% ( 2) 00:14:34.316 4.724 - 4.754: 99.3816% ( 1) 00:14:34.316 4.815 - 4.846: 99.3940% ( 2) 00:14:34.316 4.846 - 4.876: 99.4002% ( 1) 00:14:34.316 4.937 - 4.968: 99.4125% ( 2) 00:14:34.316 5.059 - 5.090: 99.4187% ( 1) 00:14:34.316 5.181 - 5.211: 99.4249% ( 1) 00:14:34.316 5.211 - 5.242: 99.4311% ( 1) 00:14:34.316 5.272 - 5.303: 99.4373% ( 1) 00:14:34.316 5.303 - 5.333: 99.4434% ( 1) 00:14:34.316 5.425 - 5.455: 99.4496% ( 1) 00:14:34.316 5.516 - 5.547: 99.4558% ( 1) 00:14:34.316 5.547 - 5.577: 99.4620% ( 1) 00:14:34.316 6.095 - 6.126: 99.4682% ( 1) 00:14:34.316 6.278 - 6.309: 99.4744% ( 1) 00:14:34.316 9.813 - 9.874: 99.4806% ( 1) 00:14:34.316 38.522 - 38.766: 99.4867% ( 1) 00:14:34.316 2012.891 - 2028.495: 99.4929% ( 1) 00:14:34.316 3198.781 - 3214.385: 99.4991% ( 1) 00:14:34.316 3994.575 - 4025.783: 99.9876% ( 79) 00:14:34.316 5960.655 - 5991.863: 99.9938% ( 1) 00:14:34.316 5991.863 - 6023.070: 100.0000% ( 1) 00:14:34.316 00:14:34.316 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:34.316 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:34.316 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:34.316 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:34.316 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:34.575 [ 00:14:34.575 { 00:14:34.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:34.575 "subtype": "Discovery", 00:14:34.575 "listen_addresses": [], 00:14:34.575 "allow_any_host": true, 00:14:34.575 "hosts": [] 00:14:34.575 }, 00:14:34.575 { 00:14:34.575 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:34.575 "subtype": "NVMe", 00:14:34.575 "listen_addresses": [ 00:14:34.575 { 00:14:34.575 "trtype": "VFIOUSER", 00:14:34.575 "adrfam": "IPv4", 00:14:34.575 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:34.575 "trsvcid": "0" 00:14:34.575 } 00:14:34.575 ], 00:14:34.575 "allow_any_host": true, 00:14:34.575 "hosts": [], 00:14:34.575 "serial_number": "SPDK1", 00:14:34.575 "model_number": "SPDK bdev Controller", 00:14:34.575 "max_namespaces": 32, 00:14:34.575 "min_cntlid": 1, 00:14:34.575 "max_cntlid": 65519, 00:14:34.575 "namespaces": [ 00:14:34.575 { 00:14:34.575 "nsid": 1, 00:14:34.575 "bdev_name": "Malloc1", 00:14:34.575 "name": "Malloc1", 00:14:34.575 "nguid": "04A2342C8ABE42E68F02035B50DB16EB", 00:14:34.575 "uuid": "04a2342c-8abe-42e6-8f02-035b50db16eb" 00:14:34.575 } 00:14:34.575 ] 00:14:34.575 }, 00:14:34.575 { 00:14:34.575 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:34.575 "subtype": "NVMe", 00:14:34.575 "listen_addresses": [ 00:14:34.575 { 00:14:34.575 "trtype": "VFIOUSER", 00:14:34.575 "adrfam": "IPv4", 00:14:34.575 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:34.575 "trsvcid": "0" 00:14:34.575 } 00:14:34.575 ], 00:14:34.575 "allow_any_host": true, 00:14:34.575 "hosts": [], 00:14:34.575 "serial_number": "SPDK2", 00:14:34.575 "model_number": "SPDK bdev Controller", 00:14:34.575 "max_namespaces": 32, 00:14:34.575 "min_cntlid": 1, 00:14:34.575 "max_cntlid": 65519, 00:14:34.575 "namespaces": [ 00:14:34.575 { 00:14:34.575 "nsid": 1, 00:14:34.575 "bdev_name": "Malloc2", 00:14:34.575 "name": "Malloc2", 00:14:34.575 "nguid": "AB71AF31B1EF4C9BBC7126AFA5B84C3C", 00:14:34.575 "uuid": "ab71af31-b1ef-4c9b-bc71-26afa5b84c3c" 00:14:34.575 } 00:14:34.575 ] 00:14:34.575 } 00:14:34.575 ] 00:14:34.575 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=24325 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:34.576 04:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:34.576 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:34.834 [2024-12-10 04:01:33.962789] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.834 Malloc3 00:14:34.834 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:35.093 [2024-12-10 04:01:34.217672] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.093 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:35.093 Asynchronous Event Request test 00:14:35.093 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.093 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.093 Registering asynchronous event callbacks... 00:14:35.093 Starting namespace attribute notice tests for all controllers... 00:14:35.093 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:35.093 aer_cb - Changed Namespace 00:14:35.093 Cleaning up... 
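For reference, the namespace hot-add that fires the AER above reduces to two RPCs against the running target. A minimal bash sketch, assuming the same tree layout and target state as this run (command names and arguments are copied from the log, not a new API):

    #!/usr/bin/env bash
    # Hot-add a namespace to the vfio-user subsystem; any controller that
    # enabled namespace-attribute notices receives an AER for log page 4
    # (Changed Namespace List), which is what aer_cb reports above.
    set -euo pipefail
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create 64 512 --name Malloc3   # 64 MiB bdev, 512 B blocks
    "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    "$RPC" nvmf_get_subsystems                        # Malloc3 now listed as nsid 2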
00:14:35.353 [ 00:14:35.353 { 00:14:35.353 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:35.353 "subtype": "Discovery", 00:14:35.353 "listen_addresses": [], 00:14:35.353 "allow_any_host": true, 00:14:35.353 "hosts": [] 00:14:35.353 }, 00:14:35.353 { 00:14:35.353 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:35.353 "subtype": "NVMe", 00:14:35.353 "listen_addresses": [ 00:14:35.353 { 00:14:35.353 "trtype": "VFIOUSER", 00:14:35.353 "adrfam": "IPv4", 00:14:35.353 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:35.353 "trsvcid": "0" 00:14:35.353 } 00:14:35.353 ], 00:14:35.353 "allow_any_host": true, 00:14:35.353 "hosts": [], 00:14:35.353 "serial_number": "SPDK1", 00:14:35.353 "model_number": "SPDK bdev Controller", 00:14:35.353 "max_namespaces": 32, 00:14:35.353 "min_cntlid": 1, 00:14:35.353 "max_cntlid": 65519, 00:14:35.353 "namespaces": [ 00:14:35.353 { 00:14:35.353 "nsid": 1, 00:14:35.353 "bdev_name": "Malloc1", 00:14:35.353 "name": "Malloc1", 00:14:35.353 "nguid": "04A2342C8ABE42E68F02035B50DB16EB", 00:14:35.353 "uuid": "04a2342c-8abe-42e6-8f02-035b50db16eb" 00:14:35.353 }, 00:14:35.353 { 00:14:35.353 "nsid": 2, 00:14:35.353 "bdev_name": "Malloc3", 00:14:35.353 "name": "Malloc3", 00:14:35.353 "nguid": "DC79662A8DE047AAB6FDAA8A22A35952", 00:14:35.353 "uuid": "dc79662a-8de0-47aa-b6fd-aa8a22a35952" 00:14:35.353 } 00:14:35.353 ] 00:14:35.353 }, 00:14:35.353 { 00:14:35.353 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:35.353 "subtype": "NVMe", 00:14:35.353 "listen_addresses": [ 00:14:35.353 { 00:14:35.353 "trtype": "VFIOUSER", 00:14:35.353 "adrfam": "IPv4", 00:14:35.353 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:35.353 "trsvcid": "0" 00:14:35.353 } 00:14:35.353 ], 00:14:35.353 "allow_any_host": true, 00:14:35.353 "hosts": [], 00:14:35.353 "serial_number": "SPDK2", 00:14:35.353 "model_number": "SPDK bdev Controller", 00:14:35.353 "max_namespaces": 32, 00:14:35.353 "min_cntlid": 1, 00:14:35.353 "max_cntlid": 65519, 00:14:35.353 "namespaces": [ 00:14:35.353 { 00:14:35.354 "nsid": 1, 00:14:35.354 "bdev_name": "Malloc2", 00:14:35.354 "name": "Malloc2", 00:14:35.354 "nguid": "AB71AF31B1EF4C9BBC7126AFA5B84C3C", 00:14:35.354 "uuid": "ab71af31-b1ef-4c9b-bc71-26afa5b84c3c" 00:14:35.354 } 00:14:35.354 ] 00:14:35.354 } 00:14:35.354 ] 00:14:35.354 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 24325 00:14:35.354 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.354 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:35.354 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:35.354 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:35.354 [2024-12-10 04:01:34.458026] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:35.354 [2024-12-10 04:01:34.458055] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24443 ] 00:14:35.354 [2024-12-10 04:01:34.497113] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:35.354 [2024-12-10 04:01:34.501372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.354 [2024-12-10 04:01:34.501394] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f20a6843000 00:14:35.354 [2024-12-10 04:01:34.502369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.503372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.504378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.505385] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.506395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.507403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.508417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.509421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:35.354 [2024-12-10 04:01:34.510436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:35.354 [2024-12-10 04:01:34.510448] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f20a6838000 00:14:35.354 [2024-12-10 04:01:34.511364] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:35.354 [2024-12-10 04:01:34.522726] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:35.354 [2024-12-10 04:01:34.522749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:35.354 [2024-12-10 04:01:34.524807] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:35.354 [2024-12-10 04:01:34.524847] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:35.354 [2024-12-10 04:01:34.524918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:35.354 
[2024-12-10 04:01:34.524933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:35.354 [2024-12-10 04:01:34.524938] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:35.354 [2024-12-10 04:01:34.525808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:35.354 [2024-12-10 04:01:34.525817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:35.354 [2024-12-10 04:01:34.525824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:35.354 [2024-12-10 04:01:34.526818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:35.354 [2024-12-10 04:01:34.526826] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:35.354 [2024-12-10 04:01:34.526833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.527826] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:35.354 [2024-12-10 04:01:34.527835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.528834] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:35.354 [2024-12-10 04:01:34.528842] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:35.354 [2024-12-10 04:01:34.528849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.528855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.528962] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:35.354 [2024-12-10 04:01:34.528967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.528971] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:35.354 [2024-12-10 04:01:34.529841] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:35.354 [2024-12-10 04:01:34.530848] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:35.354 [2024-12-10 04:01:34.531856] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:35.354 [2024-12-10 04:01:34.532858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:35.354 [2024-12-10 04:01:34.536175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:35.354 [2024-12-10 04:01:34.536888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:35.354 [2024-12-10 04:01:34.536896] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:35.354 [2024-12-10 04:01:34.536901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:35.354 [2024-12-10 04:01:34.536918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:35.354 [2024-12-10 04:01:34.536925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:35.354 [2024-12-10 04:01:34.536938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.354 [2024-12-10 04:01:34.536942] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.354 [2024-12-10 04:01:34.536946] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.354 [2024-12-10 04:01:34.536956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.354 [2024-12-10 04:01:34.543179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:35.354 [2024-12-10 04:01:34.543196] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:35.354 [2024-12-10 04:01:34.543201] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:35.354 [2024-12-10 04:01:34.543205] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:35.354 [2024-12-10 04:01:34.543209] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:35.354 [2024-12-10 04:01:34.543214] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:35.354 [2024-12-10 04:01:34.543221] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:35.354 [2024-12-10 04:01:34.543225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:35.354 [2024-12-10 04:01:34.543232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:35.354 [2024-12-10 
04:01:34.543241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:35.354 [2024-12-10 04:01:34.551173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:35.354 [2024-12-10 04:01:34.551184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.354 [2024-12-10 04:01:34.551192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.354 [2024-12-10 04:01:34.551199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.354 [2024-12-10 04:01:34.551206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.354 [2024-12-10 04:01:34.551210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:35.354 [2024-12-10 04:01:34.551221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:35.354 [2024-12-10 04:01:34.551229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:35.354 [2024-12-10 04:01:34.559173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.559180] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:35.355 [2024-12-10 04:01:34.559185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.559193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.559199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.559207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.567178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.567231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.567238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.567245] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:35.355 [2024-12-10 04:01:34.567249] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:35.355 [2024-12-10 04:01:34.567253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.567258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.575171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.575184] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:35.355 [2024-12-10 04:01:34.575195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.575202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.575208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.355 [2024-12-10 04:01:34.575212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.355 [2024-12-10 04:01:34.575215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.575221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.583174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.583185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.583192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.583198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:35.355 [2024-12-10 04:01:34.583202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.355 [2024-12-10 04:01:34.583205] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.583211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.591171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.591182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591215] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:35.355 [2024-12-10 04:01:34.591219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:35.355 [2024-12-10 04:01:34.591223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:35.355 [2024-12-10 04:01:34.591239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.599173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.599185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.607173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.607184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.615173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.615186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.623172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.623187] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:35.355 [2024-12-10 04:01:34.623192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:35.355 [2024-12-10 04:01:34.623195] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:35.355 [2024-12-10 04:01:34.623198] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:35.355 [2024-12-10 04:01:34.623200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:35.355 [2024-12-10 04:01:34.623206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:35.355 [2024-12-10 04:01:34.623213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:35.355 
[2024-12-10 04:01:34.623217] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:35.355 [2024-12-10 04:01:34.623220] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.623225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.623231] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:35.355 [2024-12-10 04:01:34.623235] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:35.355 [2024-12-10 04:01:34.623238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.623243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.623250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:35.355 [2024-12-10 04:01:34.623254] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:35.355 [2024-12-10 04:01:34.623257] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:35.355 [2024-12-10 04:01:34.623262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:35.355 [2024-12-10 04:01:34.631173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.631185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.631197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:35.355 [2024-12-10 04:01:34.631203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:35.355 ===================================================== 00:14:35.355 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:35.355 ===================================================== 00:14:35.355 Controller Capabilities/Features 00:14:35.355 ================================ 00:14:35.355 Vendor ID: 4e58 00:14:35.355 Subsystem Vendor ID: 4e58 00:14:35.355 Serial Number: SPDK2 00:14:35.355 Model Number: SPDK bdev Controller 00:14:35.355 Firmware Version: 25.01 00:14:35.355 Recommended Arb Burst: 6 00:14:35.355 IEEE OUI Identifier: 8d 6b 50 00:14:35.355 Multi-path I/O 00:14:35.355 May have multiple subsystem ports: Yes 00:14:35.355 May have multiple controllers: Yes 00:14:35.355 Associated with SR-IOV VF: No 00:14:35.355 Max Data Transfer Size: 131072 00:14:35.355 Max Number of Namespaces: 32 00:14:35.355 Max Number of I/O Queues: 127 00:14:35.355 NVMe Specification Version (VS): 1.3 00:14:35.355 NVMe Specification Version (Identify): 1.3 00:14:35.355 Maximum Queue Entries: 256 00:14:35.355 Contiguous Queues Required: Yes 00:14:35.355 Arbitration Mechanisms Supported 00:14:35.355 Weighted Round Robin: Not Supported 00:14:35.355 Vendor Specific: Not 
Supported 00:14:35.355 Reset Timeout: 15000 ms 00:14:35.355 Doorbell Stride: 4 bytes 00:14:35.355 NVM Subsystem Reset: Not Supported 00:14:35.355 Command Sets Supported 00:14:35.355 NVM Command Set: Supported 00:14:35.355 Boot Partition: Not Supported 00:14:35.355 Memory Page Size Minimum: 4096 bytes 00:14:35.355 Memory Page Size Maximum: 4096 bytes 00:14:35.355 Persistent Memory Region: Not Supported 00:14:35.355 Optional Asynchronous Events Supported 00:14:35.355 Namespace Attribute Notices: Supported 00:14:35.355 Firmware Activation Notices: Not Supported 00:14:35.355 ANA Change Notices: Not Supported 00:14:35.355 PLE Aggregate Log Change Notices: Not Supported 00:14:35.355 LBA Status Info Alert Notices: Not Supported 00:14:35.355 EGE Aggregate Log Change Notices: Not Supported 00:14:35.355 Normal NVM Subsystem Shutdown event: Not Supported 00:14:35.356 Zone Descriptor Change Notices: Not Supported 00:14:35.356 Discovery Log Change Notices: Not Supported 00:14:35.356 Controller Attributes 00:14:35.356 128-bit Host Identifier: Supported 00:14:35.356 Non-Operational Permissive Mode: Not Supported 00:14:35.356 NVM Sets: Not Supported 00:14:35.356 Read Recovery Levels: Not Supported 00:14:35.356 Endurance Groups: Not Supported 00:14:35.356 Predictable Latency Mode: Not Supported 00:14:35.356 Traffic Based Keep ALive: Not Supported 00:14:35.356 Namespace Granularity: Not Supported 00:14:35.356 SQ Associations: Not Supported 00:14:35.356 UUID List: Not Supported 00:14:35.356 Multi-Domain Subsystem: Not Supported 00:14:35.356 Fixed Capacity Management: Not Supported 00:14:35.356 Variable Capacity Management: Not Supported 00:14:35.356 Delete Endurance Group: Not Supported 00:14:35.356 Delete NVM Set: Not Supported 00:14:35.356 Extended LBA Formats Supported: Not Supported 00:14:35.356 Flexible Data Placement Supported: Not Supported 00:14:35.356 00:14:35.356 Controller Memory Buffer Support 00:14:35.356 ================================ 00:14:35.356 Supported: No 00:14:35.356 00:14:35.356 Persistent Memory Region Support 00:14:35.356 ================================ 00:14:35.356 Supported: No 00:14:35.356 00:14:35.356 Admin Command Set Attributes 00:14:35.356 ============================ 00:14:35.356 Security Send/Receive: Not Supported 00:14:35.356 Format NVM: Not Supported 00:14:35.356 Firmware Activate/Download: Not Supported 00:14:35.356 Namespace Management: Not Supported 00:14:35.356 Device Self-Test: Not Supported 00:14:35.356 Directives: Not Supported 00:14:35.356 NVMe-MI: Not Supported 00:14:35.356 Virtualization Management: Not Supported 00:14:35.356 Doorbell Buffer Config: Not Supported 00:14:35.356 Get LBA Status Capability: Not Supported 00:14:35.356 Command & Feature Lockdown Capability: Not Supported 00:14:35.356 Abort Command Limit: 4 00:14:35.356 Async Event Request Limit: 4 00:14:35.356 Number of Firmware Slots: N/A 00:14:35.356 Firmware Slot 1 Read-Only: N/A 00:14:35.356 Firmware Activation Without Reset: N/A 00:14:35.356 Multiple Update Detection Support: N/A 00:14:35.356 Firmware Update Granularity: No Information Provided 00:14:35.356 Per-Namespace SMART Log: No 00:14:35.356 Asymmetric Namespace Access Log Page: Not Supported 00:14:35.356 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:35.356 Command Effects Log Page: Supported 00:14:35.356 Get Log Page Extended Data: Supported 00:14:35.356 Telemetry Log Pages: Not Supported 00:14:35.356 Persistent Event Log Pages: Not Supported 00:14:35.356 Supported Log Pages Log Page: May Support 00:14:35.356 Commands Supported & 
Effects Log Page: Not Supported 00:14:35.356 Feature Identifiers & Effects Log Page:May Support 00:14:35.356 NVMe-MI Commands & Effects Log Page: May Support 00:14:35.356 Data Area 4 for Telemetry Log: Not Supported 00:14:35.356 Error Log Page Entries Supported: 128 00:14:35.356 Keep Alive: Supported 00:14:35.356 Keep Alive Granularity: 10000 ms 00:14:35.356 00:14:35.356 NVM Command Set Attributes 00:14:35.356 ========================== 00:14:35.356 Submission Queue Entry Size 00:14:35.356 Max: 64 00:14:35.356 Min: 64 00:14:35.356 Completion Queue Entry Size 00:14:35.356 Max: 16 00:14:35.356 Min: 16 00:14:35.356 Number of Namespaces: 32 00:14:35.356 Compare Command: Supported 00:14:35.356 Write Uncorrectable Command: Not Supported 00:14:35.356 Dataset Management Command: Supported 00:14:35.356 Write Zeroes Command: Supported 00:14:35.356 Set Features Save Field: Not Supported 00:14:35.356 Reservations: Not Supported 00:14:35.356 Timestamp: Not Supported 00:14:35.356 Copy: Supported 00:14:35.356 Volatile Write Cache: Present 00:14:35.356 Atomic Write Unit (Normal): 1 00:14:35.356 Atomic Write Unit (PFail): 1 00:14:35.356 Atomic Compare & Write Unit: 1 00:14:35.356 Fused Compare & Write: Supported 00:14:35.356 Scatter-Gather List 00:14:35.356 SGL Command Set: Supported (Dword aligned) 00:14:35.356 SGL Keyed: Not Supported 00:14:35.356 SGL Bit Bucket Descriptor: Not Supported 00:14:35.356 SGL Metadata Pointer: Not Supported 00:14:35.356 Oversized SGL: Not Supported 00:14:35.356 SGL Metadata Address: Not Supported 00:14:35.356 SGL Offset: Not Supported 00:14:35.356 Transport SGL Data Block: Not Supported 00:14:35.356 Replay Protected Memory Block: Not Supported 00:14:35.356 00:14:35.356 Firmware Slot Information 00:14:35.356 ========================= 00:14:35.356 Active slot: 1 00:14:35.356 Slot 1 Firmware Revision: 25.01 00:14:35.356 00:14:35.356 00:14:35.356 Commands Supported and Effects 00:14:35.356 ============================== 00:14:35.356 Admin Commands 00:14:35.356 -------------- 00:14:35.356 Get Log Page (02h): Supported 00:14:35.356 Identify (06h): Supported 00:14:35.356 Abort (08h): Supported 00:14:35.356 Set Features (09h): Supported 00:14:35.356 Get Features (0Ah): Supported 00:14:35.356 Asynchronous Event Request (0Ch): Supported 00:14:35.356 Keep Alive (18h): Supported 00:14:35.356 I/O Commands 00:14:35.356 ------------ 00:14:35.356 Flush (00h): Supported LBA-Change 00:14:35.356 Write (01h): Supported LBA-Change 00:14:35.356 Read (02h): Supported 00:14:35.356 Compare (05h): Supported 00:14:35.356 Write Zeroes (08h): Supported LBA-Change 00:14:35.356 Dataset Management (09h): Supported LBA-Change 00:14:35.356 Copy (19h): Supported LBA-Change 00:14:35.356 00:14:35.356 Error Log 00:14:35.356 ========= 00:14:35.356 00:14:35.356 Arbitration 00:14:35.356 =========== 00:14:35.356 Arbitration Burst: 1 00:14:35.356 00:14:35.356 Power Management 00:14:35.356 ================ 00:14:35.356 Number of Power States: 1 00:14:35.356 Current Power State: Power State #0 00:14:35.356 Power State #0: 00:14:35.356 Max Power: 0.00 W 00:14:35.356 Non-Operational State: Operational 00:14:35.356 Entry Latency: Not Reported 00:14:35.356 Exit Latency: Not Reported 00:14:35.356 Relative Read Throughput: 0 00:14:35.356 Relative Read Latency: 0 00:14:35.356 Relative Write Throughput: 0 00:14:35.356 Relative Write Latency: 0 00:14:35.356 Idle Power: Not Reported 00:14:35.356 Active Power: Not Reported 00:14:35.356 Non-Operational Permissive Mode: Not Supported 00:14:35.356 00:14:35.356 Health Information 
00:14:35.356 ==================
00:14:35.356 Critical Warnings:
00:14:35.356 Available Spare Space: OK
00:14:35.356 Temperature: OK
00:14:35.356 Device Reliability: OK
00:14:35.356 Read Only: No
00:14:35.356 Volatile Memory Backup: OK
00:14:35.356 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:35.356 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:35.356 Available Spare: 0%
00:14:35.356 [2024-12-10 04:01:34.631288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:35.617 [2024-12-10 04:01:34.639173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:35.617 [2024-12-10 04:01:34.639203] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:14:35.617 [2024-12-10 04:01:34.639212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.617 [2024-12-10 04:01:34.639218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.617 [2024-12-10 04:01:34.639223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.617 [2024-12-10 04:01:34.639228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:35.617 [2024-12-10 04:01:34.639276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:14:35.617 [2024-12-10 04:01:34.639287] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:14:35.617 [2024-12-10 04:01:34.640286] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:35.617 [2024-12-10 04:01:34.640329] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:14:35.617 [2024-12-10 04:01:34.640335] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:14:35.617 [2024-12-10 04:01:34.641288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:14:35.617 [2024-12-10 04:01:34.641299] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:14:35.617 [2024-12-10 04:01:34.641344] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:14:35.617 [2024-12-10 04:01:34.642305] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:35.617 Available Spare Threshold: 0%
00:14:35.617 Life Percentage Used: 0%
00:14:35.617 Data Units Read: 0
00:14:35.617 Data Units Written: 0
00:14:35.617 Host Read Commands: 0
00:14:35.617 Host Write Commands: 0
00:14:35.617 Controller Busy Time: 0 minutes
00:14:35.617 Power Cycles: 0
00:14:35.617 Power On Hours: 0 hours
00:14:35.617 Unsafe Shutdowns: 0
00:14:35.617 Unrecoverable Media Errors: 0
00:14:35.617 Lifetime Error Log Entries: 0
00:14:35.617 Warning Temperature
Time: 0 minutes 00:14:35.617 Critical Temperature Time: 0 minutes 00:14:35.617 00:14:35.617 Number of Queues 00:14:35.617 ================ 00:14:35.617 Number of I/O Submission Queues: 127 00:14:35.617 Number of I/O Completion Queues: 127 00:14:35.617 00:14:35.617 Active Namespaces 00:14:35.617 ================= 00:14:35.617 Namespace ID:1 00:14:35.617 Error Recovery Timeout: Unlimited 00:14:35.617 Command Set Identifier: NVM (00h) 00:14:35.617 Deallocate: Supported 00:14:35.617 Deallocated/Unwritten Error: Not Supported 00:14:35.617 Deallocated Read Value: Unknown 00:14:35.617 Deallocate in Write Zeroes: Not Supported 00:14:35.617 Deallocated Guard Field: 0xFFFF 00:14:35.617 Flush: Supported 00:14:35.617 Reservation: Supported 00:14:35.618 Namespace Sharing Capabilities: Multiple Controllers 00:14:35.618 Size (in LBAs): 131072 (0GiB) 00:14:35.618 Capacity (in LBAs): 131072 (0GiB) 00:14:35.618 Utilization (in LBAs): 131072 (0GiB) 00:14:35.618 NGUID: AB71AF31B1EF4C9BBC7126AFA5B84C3C 00:14:35.618 UUID: ab71af31-b1ef-4c9b-bc71-26afa5b84c3c 00:14:35.618 Thin Provisioning: Not Supported 00:14:35.618 Per-NS Atomic Units: Yes 00:14:35.618 Atomic Boundary Size (Normal): 0 00:14:35.618 Atomic Boundary Size (PFail): 0 00:14:35.618 Atomic Boundary Offset: 0 00:14:35.618 Maximum Single Source Range Length: 65535 00:14:35.618 Maximum Copy Length: 65535 00:14:35.618 Maximum Source Range Count: 1 00:14:35.618 NGUID/EUI64 Never Reused: No 00:14:35.618 Namespace Write Protected: No 00:14:35.618 Number of LBA Formats: 1 00:14:35.618 Current LBA Format: LBA Format #00 00:14:35.618 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:35.618 00:14:35.618 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:35.618 [2024-12-10 04:01:34.863370] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:40.889 Initializing NVMe Controllers 00:14:40.889 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:40.889 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:40.889 Initialization complete. Launching workers. 
00:14:40.889 ======================================================== 00:14:40.889 Latency(us) 00:14:40.889 Device Information : IOPS MiB/s Average min max 00:14:40.889 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39943.80 156.03 3205.99 981.41 7600.76 00:14:40.889 ======================================================== 00:14:40.889 Total : 39943.80 156.03 3205.99 981.41 7600.76 00:14:40.889 00:14:40.889 [2024-12-10 04:01:39.971409] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:40.889 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:41.147 [2024-12-10 04:01:40.210186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.418 Initializing NVMe Controllers 00:14:46.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:46.418 Initialization complete. Launching workers. 00:14:46.418 ======================================================== 00:14:46.418 Latency(us) 00:14:46.418 Device Information : IOPS MiB/s Average min max 00:14:46.418 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39908.56 155.89 3206.91 990.68 9242.62 00:14:46.418 ======================================================== 00:14:46.418 Total : 39908.56 155.89 3206.91 990.68 9242.62 00:14:46.418 00:14:46.418 [2024-12-10 04:01:45.228685] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.418 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:46.418 [2024-12-10 04:01:45.438892] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.691 [2024-12-10 04:01:50.569268] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.692 Initializing NVMe Controllers 00:14:51.692 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:51.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:51.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:51.692 Initialization complete. Launching workers. 
00:14:51.692 Starting thread on core 2 00:14:51.692 Starting thread on core 3 00:14:51.692 Starting thread on core 1 00:14:51.692 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:51.692 [2024-12-10 04:01:50.865575] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.998 [2024-12-10 04:01:53.925404] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.998 Initializing NVMe Controllers 00:14:54.998 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.998 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.998 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:54.998 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:54.998 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:54.998 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:54.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:54.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:54.998 Initialization complete. Launching workers. 00:14:54.998 Starting thread on core 1 with urgent priority queue 00:14:54.998 Starting thread on core 2 with urgent priority queue 00:14:54.998 Starting thread on core 3 with urgent priority queue 00:14:54.998 Starting thread on core 0 with urgent priority queue 00:14:54.998 SPDK bdev Controller (SPDK2 ) core 0: 7838.33 IO/s 12.76 secs/100000 ios 00:14:54.998 SPDK bdev Controller (SPDK2 ) core 1: 8796.67 IO/s 11.37 secs/100000 ios 00:14:54.998 SPDK bdev Controller (SPDK2 ) core 2: 7723.33 IO/s 12.95 secs/100000 ios 00:14:54.998 SPDK bdev Controller (SPDK2 ) core 3: 9439.00 IO/s 10.59 secs/100000 ios 00:14:54.998 ======================================================== 00:14:54.998 00:14:54.998 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:54.998 [2024-12-10 04:01:54.216616] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.998 Initializing NVMe Controllers 00:14:54.998 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.998 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.998 Namespace ID: 1 size: 0GB 00:14:54.998 Initialization complete. 00:14:54.998 INFO: using host memory buffer for IO 00:14:54.998 Hello world! 
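(Annotation, not part of the recorded run: the runs above exercise the same vfio-user controller with SPDK's stock example apps, spdk_nvme_perf, reconnect, arbitration, and hello_world, varying only the workload flags. A minimal sketch of the invocation pattern follows; the transport ID string and every flag are copied from the commands recorded above, while SPDK_DIR is an assumed stand-in for the build tree path and a target serving cnode2 must already be listening.)

# Sketch (bash), assuming an SPDK build in SPDK_DIR and a vfio-user target
# already serving nqn.2019-07.io.spdk:cnode2 at the traddr below.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 4 KiB sequential reads, queue depth 128, 5 seconds, pinned to core 1 (mask 0x2):
"$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# Arbitration example for 3 seconds, as in the run above:
"$SPDK_DIR"/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g

# Single-I/O smoke test (the "Hello world!" line above):
"$SPDK_DIR"/build/examples/hello_world -d 256 -g -r "$TRID"

(The same pattern with -w write or -w randrw -M 50 reproduces the write and reconnect runs recorded above.)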
00:14:54.998 [2024-12-10 04:01:54.226691] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.998 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:55.257 [2024-12-10 04:01:54.502281] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.638 Initializing NVMe Controllers 00:14:56.638 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.638 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.638 Initialization complete. Launching workers. 00:14:56.638 submit (in ns) avg, min, max = 6261.8, 3184.8, 4003436.2 00:14:56.638 complete (in ns) avg, min, max = 19859.0, 1754.3, 5994089.5 00:14:56.638 00:14:56.638 Submit histogram 00:14:56.638 ================ 00:14:56.638 Range in us Cumulative Count 00:14:56.638 3.185 - 3.200: 0.0871% ( 14) 00:14:56.638 3.200 - 3.215: 0.6721% ( 94) 00:14:56.638 3.215 - 3.230: 2.0101% ( 215) 00:14:56.638 3.230 - 3.246: 4.1197% ( 339) 00:14:56.638 3.246 - 3.261: 7.8163% ( 594) 00:14:56.638 3.261 - 3.276: 13.5354% ( 919) 00:14:56.638 3.276 - 3.291: 19.5781% ( 971) 00:14:56.638 3.291 - 3.307: 25.7390% ( 990) 00:14:56.638 3.307 - 3.322: 33.2006% ( 1199) 00:14:56.638 3.322 - 3.337: 39.2184% ( 967) 00:14:56.638 3.337 - 3.352: 44.5641% ( 859) 00:14:56.638 3.352 - 3.368: 49.0696% ( 724) 00:14:56.638 3.368 - 3.383: 54.1664% ( 819) 00:14:56.638 3.383 - 3.398: 58.7902% ( 743) 00:14:56.638 3.398 - 3.413: 63.7501% ( 797) 00:14:56.638 3.413 - 3.429: 70.1973% ( 1036) 00:14:56.638 3.429 - 3.444: 75.1011% ( 788) 00:14:56.638 3.444 - 3.459: 79.2084% ( 660) 00:14:56.638 3.459 - 3.474: 82.6560% ( 554) 00:14:56.638 3.474 - 3.490: 85.1951% ( 408) 00:14:56.638 3.490 - 3.505: 86.6949% ( 241) 00:14:56.638 3.505 - 3.520: 87.4666% ( 124) 00:14:56.638 3.520 - 3.535: 87.9955% ( 85) 00:14:56.638 3.535 - 3.550: 88.5307% ( 86) 00:14:56.638 3.550 - 3.566: 89.2153% ( 110) 00:14:56.638 3.566 - 3.581: 90.1052% ( 143) 00:14:56.638 3.581 - 3.596: 90.9826% ( 141) 00:14:56.638 3.596 - 3.611: 91.9286% ( 152) 00:14:56.638 3.611 - 3.627: 92.8620% ( 150) 00:14:56.638 3.627 - 3.642: 93.7519% ( 143) 00:14:56.638 3.642 - 3.657: 94.7041% ( 153) 00:14:56.638 3.657 - 3.672: 95.5318% ( 133) 00:14:56.638 3.672 - 3.688: 96.3283% ( 128) 00:14:56.638 3.688 - 3.703: 97.1373% ( 130) 00:14:56.638 3.703 - 3.718: 97.7410% ( 97) 00:14:56.638 3.718 - 3.733: 98.2824% ( 87) 00:14:56.638 3.733 - 3.749: 98.6620% ( 61) 00:14:56.638 3.749 - 3.764: 98.9918% ( 53) 00:14:56.638 3.764 - 3.779: 99.2034% ( 34) 00:14:56.638 3.779 - 3.794: 99.3466% ( 23) 00:14:56.638 3.794 - 3.810: 99.4959% ( 24) 00:14:56.638 3.810 - 3.825: 99.5582% ( 10) 00:14:56.638 3.825 - 3.840: 99.5830% ( 4) 00:14:56.638 3.840 - 3.855: 99.5893% ( 1) 00:14:56.638 3.855 - 3.870: 99.5955% ( 1) 00:14:56.638 3.870 - 3.886: 99.6017% ( 1) 00:14:56.638 3.886 - 3.901: 99.6142% ( 2) 00:14:56.638 3.962 - 3.992: 99.6266% ( 2) 00:14:56.638 5.272 - 5.303: 99.6328% ( 1) 00:14:56.638 5.303 - 5.333: 99.6391% ( 1) 00:14:56.638 5.333 - 5.364: 99.6453% ( 1) 00:14:56.638 5.394 - 5.425: 99.6515% ( 1) 00:14:56.638 5.455 - 5.486: 99.6577% ( 1) 00:14:56.638 5.516 - 5.547: 99.6639% ( 1) 00:14:56.638 5.547 - 5.577: 99.6764% ( 2) 00:14:56.638 5.699 - 5.730: 99.6826% ( 1) 00:14:56.638 5.730 - 5.760: 99.6888% ( 1) 
00:14:56.638 5.760 - 5.790: 99.7013% ( 2) 00:14:56.638 5.943 - 5.973: 99.7075% ( 1) 00:14:56.638 6.034 - 6.065: 99.7137% ( 1) 00:14:56.638 6.065 - 6.095: 99.7200% ( 1) 00:14:56.638 6.095 - 6.126: 99.7262% ( 1) 00:14:56.638 6.126 - 6.156: 99.7324% ( 1) 00:14:56.638 6.156 - 6.187: 99.7386% ( 1) 00:14:56.638 6.217 - 6.248: 99.7449% ( 1) 00:14:56.638 6.248 - 6.278: 99.7511% ( 1) 00:14:56.638 6.522 - 6.552: 99.7573% ( 1) 00:14:56.638 6.552 - 6.583: 99.7635% ( 1) 00:14:56.638 6.583 - 6.613: 99.7760% ( 2) 00:14:56.638 6.644 - 6.674: 99.7822% ( 1) 00:14:56.638 6.766 - 6.796: 99.7884% ( 1) 00:14:56.638 6.796 - 6.827: 99.8009% ( 2) 00:14:56.638 6.857 - 6.888: 99.8071% ( 1) 00:14:56.638 6.949 - 6.979: 99.8133% ( 1) 00:14:56.638 7.040 - 7.070: 99.8195% ( 1) 00:14:56.638 7.070 - 7.101: 99.8320% ( 2) 00:14:56.638 7.101 - 7.131: 99.8382% ( 1) 00:14:56.638 7.162 - 7.192: 99.8444% ( 1) 00:14:56.638 7.192 - 7.223: 99.8506% ( 1) 00:14:56.638 [2024-12-10 04:01:55.596138] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.638 7.375 - 7.406: 99.8569% ( 1) 00:14:56.638 7.406 - 7.436: 99.8631% ( 1) 00:14:56.638 7.771 - 7.802: 99.8693% ( 1) 00:14:56.638 7.863 - 7.924: 99.8818% ( 2) 00:14:56.638 7.985 - 8.046: 99.8880% ( 1) 00:14:56.638 8.229 - 8.290: 99.9004% ( 2) 00:14:56.638 8.350 - 8.411: 99.9067% ( 1) 00:14:56.638 8.533 - 8.594: 99.9129% ( 1) 00:14:56.638 8.716 - 8.777: 99.9191% ( 1) 00:14:56.638 9.935 - 9.996: 99.9253% ( 1) 00:14:56.638 2012.891 - 2028.495: 99.9315% ( 1) 00:14:56.638 3994.575 - 4025.783: 100.0000% ( 11) 00:14:56.638 00:14:56.638 Complete histogram 00:14:56.638 ================== 00:14:56.638 Range in us Cumulative Count 00:14:56.638 1.752 - 1.760: 0.0871% ( 14) 00:14:56.638 1.760 - 1.768: 1.8545% ( 284) 00:14:56.638 1.768 - 1.775: 9.5526% ( 1237) 00:14:56.638 1.775 - 1.783: 19.4412% ( 1589) 00:14:56.638 1.783 - 1.790: 24.3761% ( 793) 00:14:56.638 1.790 - 1.798: 26.2804% ( 306) 00:14:56.638 1.798 - 1.806: 28.3527% ( 333) 00:14:56.638 1.806 - 1.813: 36.9158% ( 1376) 00:14:56.638 1.813 - 1.821: 60.7816% ( 3835) 00:14:56.638 1.821 - 1.829: 82.2266% ( 3446) 00:14:56.638 1.829 - 1.836: 90.6839% ( 1359) 00:14:56.638 1.836 - 1.844: 93.7706% ( 496) 00:14:56.638 1.844 - 1.851: 95.6127% ( 296) 00:14:56.638 1.851 - 1.859: 96.2972% ( 110) 00:14:56.638 1.859 - 1.867: 96.7453% ( 72) 00:14:56.638 1.867 - 1.874: 97.0627% ( 51) 00:14:56.638 1.874 - 1.882: 97.4423% ( 61) 00:14:56.639 1.882 - 1.890: 97.9339% ( 79) 00:14:56.639 1.890 - 1.897: 98.4255% ( 79) 00:14:56.639 1.897 - 1.905: 98.8736% ( 72) 00:14:56.639 1.905 - 1.912: 99.1039% ( 37) 00:14:56.639 1.912 - 1.920: 99.2159% ( 18) 00:14:56.639 1.920 - 1.928: 99.2594% ( 7) 00:14:56.639 1.928 - 1.935: 99.2781% ( 3) 00:14:56.639 1.935 - 1.943: 99.2843% ( 1) 00:14:56.639 1.943 - 1.950: 99.2968% ( 2) 00:14:56.639 1.950 - 1.966: 99.3217% ( 4) 00:14:56.639 1.966 - 1.981: 99.3279% ( 1) 00:14:56.639 1.981 - 1.996: 99.3403% ( 2) 00:14:56.639 1.996 - 2.011: 99.3466% ( 1) 00:14:56.639 2.011 - 2.027: 99.3590% ( 2) 00:14:56.639 2.057 - 2.072: 99.3652% ( 1) 00:14:56.639 2.088 - 2.103: 99.3715% ( 1) 00:14:56.639 2.331 - 2.347: 99.3777% ( 1) 00:14:56.639 3.703 - 3.718: 99.3839% ( 1) 00:14:56.639 3.749 - 3.764: 99.3901% ( 1) 00:14:56.639 3.992 - 4.023: 99.3964% ( 1) 00:14:56.639 4.114 - 4.145: 99.4026% ( 1) 00:14:56.639 4.145 - 4.175: 99.4088% ( 1) 00:14:56.639 4.724 - 4.754: 99.4150% ( 1) 00:14:56.639 4.815 - 4.846: 99.4212% ( 1) 00:14:56.639 4.876 - 4.907: 99.4275% ( 1) 00:14:56.639 4.968 - 4.998: 99.4337% ( 1) 
00:14:56.639 5.059 - 5.090: 99.4399% ( 1) 00:14:56.639 5.090 - 5.120: 99.4461% ( 1) 00:14:56.639 5.211 - 5.242: 99.4524% ( 1) 00:14:56.639 5.242 - 5.272: 99.4586% ( 1) 00:14:56.639 5.333 - 5.364: 99.4648% ( 1) 00:14:56.639 5.455 - 5.486: 99.4710% ( 1) 00:14:56.639 5.577 - 5.608: 99.4773% ( 1) 00:14:56.639 5.638 - 5.669: 99.4835% ( 1) 00:14:56.639 5.730 - 5.760: 99.4897% ( 1) 00:14:56.639 5.760 - 5.790: 99.4959% ( 1) 00:14:56.639 5.882 - 5.912: 99.5021% ( 1) 00:14:56.639 5.943 - 5.973: 99.5084% ( 1) 00:14:56.639 6.126 - 6.156: 99.5146% ( 1) 00:14:56.639 6.796 - 6.827: 99.5208% ( 1) 00:14:56.639 6.888 - 6.918: 99.5270% ( 1) 00:14:56.639 7.345 - 7.375: 99.5333% ( 1) 00:14:56.639 7.771 - 7.802: 99.5395% ( 1) 00:14:56.639 10.545 - 10.606: 99.5457% ( 1) 00:14:56.639 1997.288 - 2012.891: 99.5519% ( 1) 00:14:56.639 2012.891 - 2028.495: 99.5706% ( 3) 00:14:56.639 2028.495 - 2044.099: 99.5768% ( 1) 00:14:56.639 2168.930 - 2184.533: 99.5830% ( 1) 00:14:56.639 3994.575 - 4025.783: 99.9689% ( 62) 00:14:56.639 5991.863 - 6023.070: 100.0000% ( 5) 00:14:56.639 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.639 [ 00:14:56.639 { 00:14:56.639 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.639 "subtype": "Discovery", 00:14:56.639 "listen_addresses": [], 00:14:56.639 "allow_any_host": true, 00:14:56.639 "hosts": [] 00:14:56.639 }, 00:14:56.639 { 00:14:56.639 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.639 "subtype": "NVMe", 00:14:56.639 "listen_addresses": [ 00:14:56.639 { 00:14:56.639 "trtype": "VFIOUSER", 00:14:56.639 "adrfam": "IPv4", 00:14:56.639 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.639 "trsvcid": "0" 00:14:56.639 } 00:14:56.639 ], 00:14:56.639 "allow_any_host": true, 00:14:56.639 "hosts": [], 00:14:56.639 "serial_number": "SPDK1", 00:14:56.639 "model_number": "SPDK bdev Controller", 00:14:56.639 "max_namespaces": 32, 00:14:56.639 "min_cntlid": 1, 00:14:56.639 "max_cntlid": 65519, 00:14:56.639 "namespaces": [ 00:14:56.639 { 00:14:56.639 "nsid": 1, 00:14:56.639 "bdev_name": "Malloc1", 00:14:56.639 "name": "Malloc1", 00:14:56.639 "nguid": "04A2342C8ABE42E68F02035B50DB16EB", 00:14:56.639 "uuid": "04a2342c-8abe-42e6-8f02-035b50db16eb" 00:14:56.639 }, 00:14:56.639 { 00:14:56.639 "nsid": 2, 00:14:56.639 "bdev_name": "Malloc3", 00:14:56.639 "name": "Malloc3", 00:14:56.639 "nguid": "DC79662A8DE047AAB6FDAA8A22A35952", 00:14:56.639 "uuid": "dc79662a-8de0-47aa-b6fd-aa8a22a35952" 00:14:56.639 } 00:14:56.639 ] 00:14:56.639 }, 00:14:56.639 { 00:14:56.639 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.639 "subtype": "NVMe", 00:14:56.639 "listen_addresses": [ 00:14:56.639 { 00:14:56.639 "trtype": "VFIOUSER", 00:14:56.639 "adrfam": "IPv4", 00:14:56.639 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.639 "trsvcid": "0" 00:14:56.639 } 00:14:56.639 
], 00:14:56.639 "allow_any_host": true, 00:14:56.639 "hosts": [], 00:14:56.639 "serial_number": "SPDK2", 00:14:56.639 "model_number": "SPDK bdev Controller", 00:14:56.639 "max_namespaces": 32, 00:14:56.639 "min_cntlid": 1, 00:14:56.639 "max_cntlid": 65519, 00:14:56.639 "namespaces": [ 00:14:56.639 { 00:14:56.639 "nsid": 1, 00:14:56.639 "bdev_name": "Malloc2", 00:14:56.639 "name": "Malloc2", 00:14:56.639 "nguid": "AB71AF31B1EF4C9BBC7126AFA5B84C3C", 00:14:56.639 "uuid": "ab71af31-b1ef-4c9b-bc71-26afa5b84c3c" 00:14:56.639 } 00:14:56.639 ] 00:14:56.639 } 00:14:56.639 ] 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=27906 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:56.639 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:56.898 [2024-12-10 04:01:56.006558] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.898 Malloc4 00:14:56.899 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:57.157 [2024-12-10 04:01:56.264551] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.157 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:57.157 Asynchronous Event Request test 00:14:57.157 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.157 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.157 Registering asynchronous event callbacks... 00:14:57.157 Starting namespace attribute notice tests for all controllers... 00:14:57.157 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:57.157 aer_cb - Changed Namespace 00:14:57.157 Cleaning up... 
00:14:57.417 [ 00:14:57.417 { 00:14:57.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:57.417 "subtype": "Discovery", 00:14:57.417 "listen_addresses": [], 00:14:57.417 "allow_any_host": true, 00:14:57.417 "hosts": [] 00:14:57.417 }, 00:14:57.417 { 00:14:57.417 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:57.417 "subtype": "NVMe", 00:14:57.417 "listen_addresses": [ 00:14:57.417 { 00:14:57.417 "trtype": "VFIOUSER", 00:14:57.417 "adrfam": "IPv4", 00:14:57.417 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:57.417 "trsvcid": "0" 00:14:57.417 } 00:14:57.417 ], 00:14:57.417 "allow_any_host": true, 00:14:57.417 "hosts": [], 00:14:57.417 "serial_number": "SPDK1", 00:14:57.417 "model_number": "SPDK bdev Controller", 00:14:57.417 "max_namespaces": 32, 00:14:57.417 "min_cntlid": 1, 00:14:57.417 "max_cntlid": 65519, 00:14:57.417 "namespaces": [ 00:14:57.417 { 00:14:57.417 "nsid": 1, 00:14:57.417 "bdev_name": "Malloc1", 00:14:57.417 "name": "Malloc1", 00:14:57.417 "nguid": "04A2342C8ABE42E68F02035B50DB16EB", 00:14:57.417 "uuid": "04a2342c-8abe-42e6-8f02-035b50db16eb" 00:14:57.417 }, 00:14:57.417 { 00:14:57.417 "nsid": 2, 00:14:57.417 "bdev_name": "Malloc3", 00:14:57.417 "name": "Malloc3", 00:14:57.417 "nguid": "DC79662A8DE047AAB6FDAA8A22A35952", 00:14:57.417 "uuid": "dc79662a-8de0-47aa-b6fd-aa8a22a35952" 00:14:57.417 } 00:14:57.417 ] 00:14:57.417 }, 00:14:57.417 { 00:14:57.417 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:57.417 "subtype": "NVMe", 00:14:57.417 "listen_addresses": [ 00:14:57.417 { 00:14:57.417 "trtype": "VFIOUSER", 00:14:57.417 "adrfam": "IPv4", 00:14:57.417 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:57.417 "trsvcid": "0" 00:14:57.417 } 00:14:57.417 ], 00:14:57.417 "allow_any_host": true, 00:14:57.417 "hosts": [], 00:14:57.417 "serial_number": "SPDK2", 00:14:57.417 "model_number": "SPDK bdev Controller", 00:14:57.417 "max_namespaces": 32, 00:14:57.417 "min_cntlid": 1, 00:14:57.417 "max_cntlid": 65519, 00:14:57.417 "namespaces": [ 00:14:57.417 { 00:14:57.417 "nsid": 1, 00:14:57.417 "bdev_name": "Malloc2", 00:14:57.417 "name": "Malloc2", 00:14:57.417 "nguid": "AB71AF31B1EF4C9BBC7126AFA5B84C3C", 00:14:57.417 "uuid": "ab71af31-b1ef-4c9b-bc71-26afa5b84c3c" 00:14:57.417 }, 00:14:57.417 { 00:14:57.417 "nsid": 2, 00:14:57.417 "bdev_name": "Malloc4", 00:14:57.417 "name": "Malloc4", 00:14:57.417 "nguid": "F8F48881582249D09AB85689C09CA66B", 00:14:57.417 "uuid": "f8f48881-5822-49d0-9ab8-5689c09ca66b" 00:14:57.417 } 00:14:57.417 ] 00:14:57.417 } 00:14:57.417 ] 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 27906 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 20388 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 20388 ']' 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 20388 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 20388 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 20388' 00:14:57.417 killing process with pid 20388 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 20388 00:14:57.417 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 20388 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=28034 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 28034' 00:14:57.677 Process pid: 28034 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 28034 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 28034 ']' 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.677 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:57.677 [2024-12-10 04:01:56.824109] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:57.677 [2024-12-10 04:01:56.824927] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:57.677 [2024-12-10 04:01:56.824966] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.677 [2024-12-10 04:01:56.899613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.677 [2024-12-10 04:01:56.939716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.677 [2024-12-10 04:01:56.939754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.677 [2024-12-10 04:01:56.939761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.677 [2024-12-10 04:01:56.939767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.677 [2024-12-10 04:01:56.939772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.677 [2024-12-10 04:01:56.941206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.677 [2024-12-10 04:01:56.941314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.677 [2024-12-10 04:01:56.941424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.677 [2024-12-10 04:01:56.941423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.937 [2024-12-10 04:01:57.008828] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:57.937 [2024-12-10 04:01:57.009531] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:57.937 [2024-12-10 04:01:57.009873] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:57.937 [2024-12-10 04:01:57.010299] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:57.937 [2024-12-10 04:01:57.010327] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
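(Annotation, not part of the recorded run: the startup just recorded, together with the RPC calls that follow below, is the interrupt-mode variant of the vfio-user target bring-up. Condensed here into a sketch; the binary, RPC names, arguments, and paths are taken verbatim from this log, while running from the SPDK source tree and backgrounding the target are assumptions. The transport flags -M and -I are not explained in the log itself, so they are carried over as-is.)

# Sketch (bash): interrupt-mode vfio-user target bring-up, condensed from the
# nvmf_vfio_user.sh trace above and below. Run from a built SPDK source tree.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
sleep 1   # the test script also sleeps before issuing RPCs

# Transport flags copied verbatim from the nvmf_create_transport call below:
./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

# Two controllers, each backed by a 64 MiB malloc bdev with 512-byte blocks:
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done

(Each listener then appears as a vfio-user device at its traddr, which is the path the initiator-side tools above connect to.)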
00:14:57.937 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.937 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:57.937 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.876 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:59.135 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:59.135 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:59.135 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.135 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:59.135 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.394 Malloc1 00:14:59.394 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:59.394 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.653 04:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.912 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.912 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.912 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:00.171 Malloc2 00:15:00.171 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:00.171 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:00.431 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 28034 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 28034 ']' 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 28034 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28034 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28034' 00:15:00.765 killing process with pid 28034 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 28034 00:15:00.765 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 28034 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:01.097 00:15:01.097 real 0m50.715s 00:15:01.097 user 3m16.286s 00:15:01.097 sys 0m3.162s 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:01.097 ************************************ 00:15:01.097 END TEST nvmf_vfio_user 00:15:01.097 ************************************ 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.097 ************************************ 00:15:01.097 START TEST nvmf_vfio_user_nvme_compliance 00:15:01.097 ************************************ 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:01.097 * Looking for test storage... 
00:15:01.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:01.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.097 --rc genhtml_branch_coverage=1 00:15:01.097 --rc genhtml_function_coverage=1 00:15:01.097 --rc genhtml_legend=1 00:15:01.097 --rc geninfo_all_blocks=1 00:15:01.097 --rc geninfo_unexecuted_blocks=1 00:15:01.097 00:15:01.097 ' 00:15:01.097 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:01.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.098 --rc genhtml_branch_coverage=1 00:15:01.098 --rc genhtml_function_coverage=1 00:15:01.098 --rc genhtml_legend=1 00:15:01.098 --rc geninfo_all_blocks=1 00:15:01.098 --rc geninfo_unexecuted_blocks=1 00:15:01.098 00:15:01.098 ' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:01.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.098 --rc genhtml_branch_coverage=1 00:15:01.098 --rc genhtml_function_coverage=1 00:15:01.098 --rc genhtml_legend=1 00:15:01.098 --rc geninfo_all_blocks=1 00:15:01.098 --rc geninfo_unexecuted_blocks=1 00:15:01.098 00:15:01.098 ' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:01.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.098 --rc genhtml_branch_coverage=1 00:15:01.098 --rc genhtml_function_coverage=1 00:15:01.098 --rc genhtml_legend=1 00:15:01.098 --rc geninfo_all_blocks=1 00:15:01.098 --rc 
geninfo_unexecuted_blocks=1 00:15:01.098 00:15:01.098 ' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.098 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=28779 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 28779' 00:15:01.402 Process pid: 28779 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 28779 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 28779 ']' 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.402 [2024-12-10 04:02:00.409061] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:15:01.402 [2024-12-10 04:02:00.409113] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.402 [2024-12-10 04:02:00.483030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:01.402 [2024-12-10 04:02:00.523436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.402 [2024-12-10 04:02:00.523472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.402 [2024-12-10 04:02:00.523479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.402 [2024-12-10 04:02:00.523485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.402 [2024-12-10 04:02:00.523490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.402 [2024-12-10 04:02:00.524706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.402 [2024-12-10 04:02:00.524816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.402 [2024-12-10 04:02:00.524818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:01.402 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:02.339 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:02.339 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:02.339 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:02.339 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.339 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 malloc0 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:02.598 04:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.598 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:02.598 00:15:02.598 00:15:02.598 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.598 http://cunit.sourceforge.net/ 00:15:02.598 00:15:02.598 00:15:02.598 Suite: nvme_compliance 00:15:02.598 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 04:02:01.856680] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.598 [2024-12-10 04:02:01.858024] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:02.598 [2024-12-10 04:02:01.858041] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:02.598 [2024-12-10 04:02:01.858047] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:02.598 [2024-12-10 04:02:01.859701] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.857 passed 00:15:02.857 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 04:02:01.938275] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.857 [2024-12-10 04:02:01.941291] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.857 passed 00:15:02.857 Test: admin_identify_ns ...[2024-12-10 04:02:02.020013] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.857 [2024-12-10 04:02:02.079177] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:02.857 [2024-12-10 04:02:02.087183] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:02.857 [2024-12-10 04:02:02.108274] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:02.857 passed 00:15:03.116 Test: admin_get_features_mandatory_features ...[2024-12-10 04:02:02.186933] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.116 [2024-12-10 04:02:02.189953] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.116 passed 00:15:03.116 Test: admin_get_features_optional_features ...[2024-12-10 04:02:02.267487] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.116 [2024-12-10 04:02:02.270507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.116 passed 00:15:03.116 Test: admin_set_features_number_of_queues ...[2024-12-10 04:02:02.345285] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.375 [2024-12-10 04:02:02.450273] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.375 passed 00:15:03.375 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 04:02:02.525951] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.375 [2024-12-10 04:02:02.528971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.375 passed 00:15:03.375 Test: admin_get_log_page_with_lpo ...[2024-12-10 04:02:02.606732] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.634 [2024-12-10 04:02:02.675180] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:03.634 [2024-12-10 04:02:02.688235] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.634 passed 00:15:03.634 Test: fabric_property_get ...[2024-12-10 04:02:02.761936] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.634 [2024-12-10 04:02:02.763174] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:03.634 [2024-12-10 04:02:02.764959] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.634 passed 00:15:03.634 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 04:02:02.841441] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.634 [2024-12-10 04:02:02.842667] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:03.634 [2024-12-10 04:02:02.846468] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.634 passed 00:15:03.893 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 04:02:02.921473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.893 [2024-12-10 04:02:03.009176] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.893 [2024-12-10 04:02:03.025171] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.893 [2024-12-10 04:02:03.030246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.893 passed 00:15:03.893 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 04:02:03.103007] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.893 [2024-12-10 04:02:03.104260] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:03.893 [2024-12-10 04:02:03.106029] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.893 passed 00:15:04.152 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 04:02:03.182786] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.152 [2024-12-10 04:02:03.258172] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:04.152 [2024-12-10 04:02:03.282176] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:04.152 [2024-12-10 04:02:03.287270] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.152 passed 00:15:04.152 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 04:02:03.362806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.152 [2024-12-10 04:02:03.364031] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:04.152 [2024-12-10 04:02:03.364057] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:04.152 [2024-12-10 04:02:03.365830] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.152 passed 00:15:04.411 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 04:02:03.444623] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.411 [2024-12-10 04:02:03.533177] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:04.411 [2024-12-10 04:02:03.541189] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:04.411 [2024-12-10 04:02:03.553173] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:04.411 [2024-12-10 04:02:03.561179] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:04.411 [2024-12-10 04:02:03.590267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.411 passed 00:15:04.411 Test: admin_create_io_sq_verify_pc ...[2024-12-10 04:02:03.661983] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.411 [2024-12-10 04:02:03.681183] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:04.669 [2024-12-10 04:02:03.699178] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.669 passed 00:15:04.669 Test: admin_create_io_qp_max_qps ...[2024-12-10 04:02:03.772688] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.607 [2024-12-10 04:02:04.859176] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:06.175 [2024-12-10 04:02:05.247548] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.175 passed 00:15:06.175 Test: admin_create_io_sq_shared_cq ...[2024-12-10 04:02:05.324427] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.434 [2024-12-10 04:02:05.460174] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:06.434 [2024-12-10 04:02:05.497246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.434 passed 00:15:06.434 00:15:06.434 Run Summary: Type Total Ran Passed Failed Inactive 00:15:06.434 suites 1 1 n/a 0 0 00:15:06.434 tests 18 18 18 0 0 00:15:06.434 asserts 
360 360 360 0 n/a 00:15:06.434 00:15:06.434 Elapsed time = 1.494 seconds 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 28779 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 28779 ']' 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 28779 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28779 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28779' 00:15:06.434 killing process with pid 28779 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 28779 00:15:06.434 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 28779 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:06.693 00:15:06.693 real 0m5.620s 00:15:06.693 user 0m15.757s 00:15:06.693 sys 0m0.503s 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.693 ************************************ 00:15:06.693 END TEST nvmf_vfio_user_nvme_compliance 00:15:06.693 ************************************ 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.693 ************************************ 00:15:06.693 START TEST nvmf_vfio_user_fuzz 00:15:06.693 ************************************ 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.693 * Looking for test storage... 
00:15:06.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:06.693 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.953 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:06.954 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.954 --rc genhtml_branch_coverage=1 00:15:06.954 --rc genhtml_function_coverage=1 00:15:06.954 --rc genhtml_legend=1 00:15:06.954 --rc geninfo_all_blocks=1 00:15:06.954 --rc geninfo_unexecuted_blocks=1 00:15:06.954 00:15:06.954 ' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.954 --rc genhtml_branch_coverage=1 00:15:06.954 --rc genhtml_function_coverage=1 00:15:06.954 --rc genhtml_legend=1 00:15:06.954 --rc geninfo_all_blocks=1 00:15:06.954 --rc geninfo_unexecuted_blocks=1 00:15:06.954 00:15:06.954 ' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.954 --rc genhtml_branch_coverage=1 00:15:06.954 --rc genhtml_function_coverage=1 00:15:06.954 --rc genhtml_legend=1 00:15:06.954 --rc geninfo_all_blocks=1 00:15:06.954 --rc geninfo_unexecuted_blocks=1 00:15:06.954 00:15:06.954 ' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.954 --rc genhtml_branch_coverage=1 00:15:06.954 --rc genhtml_function_coverage=1 00:15:06.954 --rc genhtml_legend=1 00:15:06.954 --rc geninfo_all_blocks=1 00:15:06.954 --rc geninfo_unexecuted_blocks=1 00:15:06.954 00:15:06.954 ' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.954 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:06.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=29742 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 29742' 00:15:06.955 Process pid: 29742 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 29742 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 29742 ']' 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
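The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where test's -eq is applied to an empty expansion ('[' '' -eq 1 ']'). The log does not show which variable expanded empty, so SOME_FLAG below is a hypothetical stand-in; this is only a sketch of the usual guard, not the fix SPDK ships:

# Breaks when the variable is unset or empty, as in the trace above:
#   [ "$SOME_FLAG" -eq 1 ]
# Defaulting the expansion keeps the numeric comparison well-formed:
if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
    echo "flag enabled"
fi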
00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.955 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.214 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.214 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:07.214 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.153 malloc0 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
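Everything the fuzzer needs is now in place. Condensed from the xtrace above into a standalone sketch (the RPC names and arguments are verbatim from the log; rpc.py stands in for the test's wrapped rpc_cmd helper and assumes a running nvmf_tgt):

# Stand up an in-process NVMe controller over vfio-user.
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0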
00:15:08.153 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:40.234 Fuzzing completed. Shutting down the fuzz application 00:15:40.234 00:15:40.234 Dumping successful admin opcodes: 00:15:40.234 9, 10, 00:15:40.234 Dumping successful io opcodes: 00:15:40.234 0, 00:15:40.234 NS: 0x20000081ef00 I/O qp, Total commands completed: 1014257, total successful commands: 3978, random_seed: 2148271232 00:15:40.234 NS: 0x20000081ef00 admin qp, Total commands completed: 248096, total successful commands: 58, random_seed: 660148480 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 29742 ']' 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29742' 00:15:40.234 killing process with pid 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 29742 00:15:40.234 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:40.234 00:15:40.234 real 0m32.209s 00:15:40.234 user 0m30.009s 00:15:40.234 sys 0m31.173s 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 ************************************ 00:15:40.234 END 
TEST nvmf_vfio_user_fuzz 00:15:40.234 ************************************ 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 ************************************ 00:15:40.234 START TEST nvmf_auth_target 00:15:40.234 ************************************ 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.234 * Looking for test storage... 00:15:40.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.234 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.234 --rc genhtml_branch_coverage=1 00:15:40.234 --rc genhtml_function_coverage=1 00:15:40.234 --rc genhtml_legend=1 00:15:40.234 --rc geninfo_all_blocks=1 00:15:40.234 --rc geninfo_unexecuted_blocks=1 00:15:40.234 00:15:40.234 ' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.235 --rc genhtml_branch_coverage=1 00:15:40.235 --rc genhtml_function_coverage=1 00:15:40.235 --rc genhtml_legend=1 00:15:40.235 --rc geninfo_all_blocks=1 00:15:40.235 --rc geninfo_unexecuted_blocks=1 00:15:40.235 00:15:40.235 ' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.235 --rc genhtml_branch_coverage=1 00:15:40.235 --rc genhtml_function_coverage=1 00:15:40.235 --rc genhtml_legend=1 00:15:40.235 --rc geninfo_all_blocks=1 00:15:40.235 --rc geninfo_unexecuted_blocks=1 00:15:40.235 00:15:40.235 ' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.235 --rc genhtml_branch_coverage=1 00:15:40.235 --rc genhtml_function_coverage=1 00:15:40.235 --rc genhtml_legend=1 00:15:40.235 --rc geninfo_all_blocks=1 00:15:40.235 --rc geninfo_unexecuted_blocks=1 00:15:40.235 00:15:40.235 ' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.235 04:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.235 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:45.511 
04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:45.511 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:45.511 04:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:45.511 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.511 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:45.512 Found net devices under 0000:af:00.0: cvl_0_0 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:45.512 Found net devices under 0000:af:00.1: cvl_0_1 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.512 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:45.512 04:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:45.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:15:45.512 00:15:45.512 --- 10.0.0.2 ping statistics --- 00:15:45.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.512 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:15:45.512 00:15:45.512 --- 10.0.0.1 ping statistics --- 00:15:45.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.512 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=38022 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 38022 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 38022 ']' 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
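The nvmf_tcp_init sequence traced above reduces to a two-port, two-namespace topology: the target port cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings verify the path in both directions before any NVMe/TCP traffic flows. A minimal standalone sketch of the same setup, with the interface names, addresses, and firewall rule taken from the log (run as root; the real helper also flushes stale IPv4 addresses from both ports first):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns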
00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=38103 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a638062bfc2ea1edf0ecc07812517c4fbbe4aa0015aa58c5 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KdC 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a638062bfc2ea1edf0ecc07812517c4fbbe4aa0015aa58c5 0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a638062bfc2ea1edf0ecc07812517c4fbbe4aa0015aa58c5 0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a638062bfc2ea1edf0ecc07812517c4fbbe4aa0015aa58c5 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
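The inline `python -` step above is where the raw hex from xxd becomes a DHHC-1 secret. The script body itself is not echoed in the trace, but the secrets that appear later in this log match the standard NVMe DH-HMAC-CHAP secret representation: DHHC-1:<digest id>:base64(key bytes || CRC-32 of the key, little-endian):. A sketch that should reproduce the key0 secret from the values above (the zlib/base64 one-liner is a reconstruction, not SPDK's actual format_key code):

    key=a638062bfc2ea1edf0ecc07812517c4fbbe4aa0015aa58c5   # 48-char secret from the trace above
    digest=0                                               # key0 was generated with the null digest
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"

For key0 this yields the DHHC-1:00:YTYzODA2...FP1sGg==: string that nvme_connect later presents as --dhchap-secret.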
00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KdC 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KdC 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.KdC 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.512 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e89752946aaac61ea88d2f8b6151fed20df0ffd21a4a5cbb9f51c5a481bf848d 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zS0 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e89752946aaac61ea88d2f8b6151fed20df0ffd21a4a5cbb9f51c5a481bf848d 3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e89752946aaac61ea88d2f8b6151fed20df0ffd21a4a5cbb9f51c5a481bf848d 3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e89752946aaac61ea88d2f8b6151fed20df0ffd21a4a5cbb9f51c5a481bf848d 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zS0 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zS0 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.zS0 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
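Note the length convention in the gen_dhchap_key calls: the len argument counts hex characters, so the helper pulls len/2 bytes of entropy, which is why len=48 pairs with `xxd -l 24` above and the len=32 immediately below pairs with `xxd -l 16`. A quick way to confirm (xxd's -p and -c0 flags emit one unbroken lowercase-hex line):

    xxd -p -c0 -l 24 /dev/urandom | tr -d '\n' | wc -c    # prints 48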
00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02a1bb559c7cc4973f3b12141b029420 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fBo 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02a1bb559c7cc4973f3b12141b029420 1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02a1bb559c7cc4973f3b12141b029420 1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02a1bb559c7cc4973f3b12141b029420 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fBo 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fBo 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.fBo 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=19d8a545528169ca7fa928165266bc51f1b5553f5b199a6d 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mt3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 19d8a545528169ca7fa928165266bc51f1b5553f5b199a6d 2 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 19d8a545528169ca7fa928165266bc51f1b5553f5b199a6d 2 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.513 04:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=19d8a545528169ca7fa928165266bc51f1b5553f5b199a6d 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mt3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mt3 00:15:45.513 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.mt3 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea064b3220d6dc5a2ac8dc1a1d73c7c3d8151ed269018e21 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.g6R 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea064b3220d6dc5a2ac8dc1a1d73c7c3d8151ed269018e21 2 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea064b3220d6dc5a2ac8dc1a1d73c7c3d8151ed269018e21 2 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea064b3220d6dc5a2ac8dc1a1d73c7c3d8151ed269018e21 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.g6R 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.g6R 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.g6R 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
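Every gen_dhchap_key invocation re-declares the same digest lookup table, and that number becomes the second field of the resulting DHHC-1 string, marking which hash the secret is tied to (00 appears to denote a null/unrestricted digest). Taken directly from the trace, with a hypothetical usage line for illustration:

    declare -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    # e.g. a sha384-bound secret is prefixed DHHC-1:02:
    printf 'DHHC-1:0%s:<base64 payload>:\n' "${digests[sha384]}"

This is consistent with the secrets used later in this log: key0 (null) surfaces as DHHC-1:00:..., ckey0 (sha512) as DHHC-1:03:..., and keys[1] (sha256) as DHHC-1:01:....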
00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf3d3396811d7c0f7ca7273f5920305e 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vmJ 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf3d3396811d7c0f7ca7273f5920305e 1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf3d3396811d7c0f7ca7273f5920305e 1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf3d3396811d7c0f7ca7273f5920305e 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vmJ 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vmJ 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.vmJ 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c8617124df8fea0ccbce2098cac4fbdf4b9ef8a7293dbb8b4e0db9777c401615 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Qfm 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c8617124df8fea0ccbce2098cac4fbdf4b9ef8a7293dbb8b4e0db9777c401615 3 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c8617124df8fea0ccbce2098cac4fbdf4b9ef8a7293dbb8b4e0db9777c401615 3 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c8617124df8fea0ccbce2098cac4fbdf4b9ef8a7293dbb8b4e0db9777c401615 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Qfm 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Qfm 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Qfm 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 38022 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 38022 ']' 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 38103 /var/tmp/host.sock 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 38103 ']' 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:46.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
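From this point the test drives two SPDK processes over separate RPC sockets: rpc_cmd talks to the in-namespace nvmf_tgt on the default /var/tmp/spdk.sock, while hostrpc (expanded at target/auth.sh@31 throughout the rest of the log) points rpc.py at the initiator-side spdk_tgt started with -r /var/tmp/host.sock. A sketch of that wrapper as reconstructed from the expanded trace (the function body is inferred; only the rpc.py path and -s argument appear verbatim in the log):

    hostrpc() {
        # forward an RPC to the initiator-side spdk_tgt
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # e.g. load the host-side copy of key0 into the initiator's keyring, as done below:
    hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KdC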
00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.031 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KdC 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KdC 00:15:46.289 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KdC 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.zS0 ]] 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zS0 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zS0 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zS0 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fBo 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.547 04:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fBo 00:15:46.547 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fBo 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.mt3 ]] 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mt3 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mt3 00:15:46.805 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mt3 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g6R 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.g6R 00:15:47.063 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.g6R 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.vmJ ]] 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vmJ 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vmJ 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vmJ 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:47.322 04:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qfm 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Qfm 00:15:47.322 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Qfm 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.581 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.839 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.839 
04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.098 00:15:48.098 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.098 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.098 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.356 { 00:15:48.356 "cntlid": 1, 00:15:48.356 "qid": 0, 00:15:48.356 "state": "enabled", 00:15:48.356 "thread": "nvmf_tgt_poll_group_000", 00:15:48.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:48.356 "listen_address": { 00:15:48.356 "trtype": "TCP", 00:15:48.356 "adrfam": "IPv4", 00:15:48.356 "traddr": "10.0.0.2", 00:15:48.356 "trsvcid": "4420" 00:15:48.356 }, 00:15:48.356 "peer_address": { 00:15:48.356 "trtype": "TCP", 00:15:48.356 "adrfam": "IPv4", 00:15:48.356 "traddr": "10.0.0.1", 00:15:48.356 "trsvcid": "41154" 00:15:48.356 }, 00:15:48.356 "auth": { 00:15:48.356 "state": "completed", 00:15:48.356 "digest": "sha256", 00:15:48.356 "dhgroup": "null" 00:15:48.356 } 00:15:48.356 } 00:15:48.356 ]' 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.356 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.615 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:15:48.615 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.182 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.440 04:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.440 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.699 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.699 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.958 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.958 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.958 { 00:15:49.958 "cntlid": 3, 00:15:49.958 "qid": 0, 00:15:49.958 "state": "enabled", 00:15:49.958 "thread": "nvmf_tgt_poll_group_000", 00:15:49.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:49.958 "listen_address": { 00:15:49.958 "trtype": "TCP", 00:15:49.958 "adrfam": "IPv4", 00:15:49.958 "traddr": "10.0.0.2", 00:15:49.958 "trsvcid": "4420" 00:15:49.958 }, 00:15:49.958 "peer_address": { 00:15:49.958 "trtype": "TCP", 00:15:49.958 "adrfam": "IPv4", 00:15:49.958 "traddr": "10.0.0.1", 00:15:49.958 "trsvcid": "41188" 00:15:49.958 }, 00:15:49.958 "auth": { 00:15:49.958 "state": "completed", 00:15:49.958 "digest": "sha256", 00:15:49.958 "dhgroup": "null" 00:15:49.958 } 00:15:49.958 } 00:15:49.958 ]' 00:15:49.958 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.958 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.217 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:15:50.217 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.785 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.043 04:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.043 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.044 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.044 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.302 { 00:15:51.302 "cntlid": 5, 00:15:51.302 "qid": 0, 00:15:51.302 "state": "enabled", 00:15:51.302 "thread": "nvmf_tgt_poll_group_000", 00:15:51.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:51.302 "listen_address": { 00:15:51.302 "trtype": "TCP", 00:15:51.302 "adrfam": "IPv4", 00:15:51.302 "traddr": "10.0.0.2", 00:15:51.302 "trsvcid": "4420" 00:15:51.302 }, 00:15:51.302 "peer_address": { 00:15:51.302 "trtype": "TCP", 00:15:51.302 "adrfam": "IPv4", 00:15:51.302 "traddr": "10.0.0.1", 00:15:51.302 "trsvcid": "41208" 00:15:51.302 }, 00:15:51.302 "auth": { 00:15:51.302 "state": "completed", 00:15:51.302 "digest": "sha256", 00:15:51.302 "dhgroup": "null" 00:15:51.302 } 00:15:51.302 } 00:15:51.302 ]' 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.302 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.561 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.561 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.561 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.561 04:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.561 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.561 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:15:51.819 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.386 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.645 00:15:52.645 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.645 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.645 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.903 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.903 { 00:15:52.903 "cntlid": 7, 00:15:52.903 "qid": 0, 00:15:52.903 "state": "enabled", 00:15:52.903 "thread": "nvmf_tgt_poll_group_000", 00:15:52.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:52.903 "listen_address": { 00:15:52.903 "trtype": "TCP", 00:15:52.903 "adrfam": "IPv4", 00:15:52.903 "traddr": "10.0.0.2", 00:15:52.903 "trsvcid": "4420" 00:15:52.903 }, 00:15:52.903 "peer_address": { 00:15:52.903 "trtype": "TCP", 00:15:52.903 "adrfam": "IPv4", 00:15:52.903 "traddr": "10.0.0.1", 00:15:52.903 "trsvcid": "41236" 00:15:52.903 }, 00:15:52.903 "auth": { 00:15:52.903 "state": "completed", 00:15:52.903 "digest": "sha256", 00:15:52.904 "dhgroup": "null" 00:15:52.904 } 00:15:52.904 } 00:15:52.904 ]' 00:15:52.904 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.904 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.904 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.904 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.904 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.161 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.161 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.161 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.161 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:15:53.161 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:15:53.727 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.727 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
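# [editor's note] Every key is also exercised through the kernel initiator:
# nvme-cli receives the DH-HMAC-CHAP secrets inline. In the
# "DHHC-1:NN:<base64>:" notation, NN identifies the hash the secret was
# wrapped with (00 meaning an unhashed secret) -- read that as a hedged gloss
# on the encoding, not something this log itself states. Sketch with
# placeholder secrets standing in for the real base64 blobs:
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host-secret-base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret-base64>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0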
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.985 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.244 00:15:54.244 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.244 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.244 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.502 { 00:15:54.502 "cntlid": 9, 00:15:54.502 "qid": 0, 00:15:54.502 "state": "enabled", 00:15:54.502 "thread": "nvmf_tgt_poll_group_000", 00:15:54.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:54.502 "listen_address": { 00:15:54.502 "trtype": "TCP", 00:15:54.502 "adrfam": "IPv4", 00:15:54.502 "traddr": "10.0.0.2", 00:15:54.502 "trsvcid": "4420" 00:15:54.502 }, 00:15:54.502 "peer_address": { 00:15:54.502 "trtype": "TCP", 00:15:54.502 "adrfam": "IPv4", 00:15:54.502 "traddr": "10.0.0.1", 00:15:54.502 "trsvcid": "41266" 00:15:54.502 }, 00:15:54.502 "auth": { 00:15:54.502 "state": "completed", 00:15:54.502 "digest": "sha256", 00:15:54.502 "dhgroup": "ffdhe2048" 00:15:54.502 } 00:15:54.502 } 00:15:54.502 ]' 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
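# [editor's note] The SPDK-side attach passes key *names* (key0, ckey0), not
# raw secrets: they refer to key objects registered by setup that precedes
# this excerpt, so the names are taken on faith from the trace. Sketch of the
# authenticated attach and its matching teardown, flags verbatim from the log:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0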
\f\f\d\h\e\2\0\4\8 ]] 00:15:54.502 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.761 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.761 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.761 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.761 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:15:54.761 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.328 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.586 04:02:54 
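# [editor's note] The recurring expansion
#     ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
# explains why key3 connects without any controller secret in this log: ":+"
# yields the flag only when ckeys[keyid] is set and non-empty, so an empty
# ckeys[3] silently downgrades that pass to one-way authentication. A
# self-contained illustration of the idiom:
ckeys=([0]=x [1]=x [2]=x [3]=)            # key 3 has no controller key
for keyid in "${!ckeys[@]}"; do           # "${!arr[@]}" expands to indices
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${ckey[*]:-<unidirectional>}"
done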
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.586 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.844 00:15:55.844 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.844 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.844 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.102 { 00:15:56.102 "cntlid": 11, 00:15:56.102 "qid": 0, 00:15:56.102 "state": "enabled", 00:15:56.102 "thread": "nvmf_tgt_poll_group_000", 00:15:56.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:56.102 "listen_address": { 00:15:56.102 "trtype": "TCP", 00:15:56.102 "adrfam": "IPv4", 00:15:56.102 "traddr": "10.0.0.2", 00:15:56.102 "trsvcid": "4420" 00:15:56.102 }, 00:15:56.102 "peer_address": { 00:15:56.102 "trtype": "TCP", 00:15:56.102 "adrfam": "IPv4", 00:15:56.102 "traddr": "10.0.0.1", 00:15:56.102 "trsvcid": "41302" 00:15:56.102 }, 00:15:56.102 "auth": { 00:15:56.102 "state": "completed", 00:15:56.102 "digest": "sha256", 00:15:56.102 "dhgroup": "ffdhe2048" 00:15:56.102 } 00:15:56.102 } 00:15:56.102 ]' 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.102 04:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.102 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.360 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:15:56.360 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.926 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.184 04:02:56 
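# [editor's note] Comparisons printed as
#     [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
# are not corruption: under "set -x" bash escapes every character of a quoted
# [[ ]] right-hand side to mark it literal (an unquoted RHS would act as a
# glob pattern). The underlying check is equivalent to:
dhgroup=ffdhe2048
[[ "$dhgroup" == "ffdhe2048" ]] && echo "dhgroup matched"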
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.184 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.443 00:15:57.443 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.443 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.443 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.701 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.701 { 00:15:57.701 "cntlid": 13, 00:15:57.701 "qid": 0, 00:15:57.701 "state": "enabled", 00:15:57.701 "thread": "nvmf_tgt_poll_group_000", 00:15:57.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:57.701 "listen_address": { 00:15:57.701 "trtype": "TCP", 00:15:57.701 "adrfam": "IPv4", 00:15:57.701 "traddr": "10.0.0.2", 00:15:57.701 "trsvcid": "4420" 00:15:57.701 }, 00:15:57.701 "peer_address": { 00:15:57.701 "trtype": "TCP", 00:15:57.702 "adrfam": "IPv4", 00:15:57.702 "traddr": "10.0.0.1", 00:15:57.702 "trsvcid": "41326" 00:15:57.702 }, 00:15:57.702 "auth": { 00:15:57.702 "state": "completed", 00:15:57.702 "digest": 
"sha256", 00:15:57.702 "dhgroup": "ffdhe2048" 00:15:57.702 } 00:15:57.702 } 00:15:57.702 ]' 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.702 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.960 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:15:57.960 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.527 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.785 04:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.785 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.044 00:15:59.044 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.044 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.044 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.302 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.303 { 00:15:59.303 "cntlid": 15, 00:15:59.303 "qid": 0, 00:15:59.303 "state": "enabled", 00:15:59.303 "thread": "nvmf_tgt_poll_group_000", 00:15:59.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:59.303 "listen_address": { 00:15:59.303 "trtype": "TCP", 00:15:59.303 "adrfam": "IPv4", 00:15:59.303 "traddr": "10.0.0.2", 00:15:59.303 "trsvcid": "4420" 00:15:59.303 }, 00:15:59.303 "peer_address": { 00:15:59.303 "trtype": "TCP", 00:15:59.303 "adrfam": "IPv4", 00:15:59.303 "traddr": "10.0.0.1", 00:15:59.303 
"trsvcid": "56344" 00:15:59.303 }, 00:15:59.303 "auth": { 00:15:59.303 "state": "completed", 00:15:59.303 "digest": "sha256", 00:15:59.303 "dhgroup": "ffdhe2048" 00:15:59.303 } 00:15:59.303 } 00:15:59.303 ]' 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.303 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.561 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:15:59.561 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.127 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:00.386 04:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.386 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.644 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.644 { 00:16:00.644 "cntlid": 17, 00:16:00.644 "qid": 0, 00:16:00.644 "state": "enabled", 00:16:00.644 "thread": "nvmf_tgt_poll_group_000", 00:16:00.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:00.644 "listen_address": { 00:16:00.644 "trtype": "TCP", 00:16:00.644 "adrfam": "IPv4", 
00:16:00.644 "traddr": "10.0.0.2", 00:16:00.644 "trsvcid": "4420" 00:16:00.644 }, 00:16:00.644 "peer_address": { 00:16:00.644 "trtype": "TCP", 00:16:00.644 "adrfam": "IPv4", 00:16:00.644 "traddr": "10.0.0.1", 00:16:00.644 "trsvcid": "56368" 00:16:00.644 }, 00:16:00.644 "auth": { 00:16:00.644 "state": "completed", 00:16:00.644 "digest": "sha256", 00:16:00.644 "dhgroup": "ffdhe3072" 00:16:00.644 } 00:16:00.644 } 00:16:00.644 ]' 00:16:00.644 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.903 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.903 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.903 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.903 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.903 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.903 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.903 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.161 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:01.161 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.728 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
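# [editor's note] On the target side, access is granted and revoked per key:
# nvmf_subsystem_add_host ties the DH-HMAC-CHAP requirement to one host NQN,
# and nvmf_subsystem_remove_host drops it after each disconnect, so no stale
# authorization carries into the next pass. Sketch (target assumed on the
# default RPC socket):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_nqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... host attaches, auth state verified, host detaches ...
"$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$host_nqn"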
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.986 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.245 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.245 { 
00:16:02.245 "cntlid": 19, 00:16:02.245 "qid": 0, 00:16:02.245 "state": "enabled", 00:16:02.245 "thread": "nvmf_tgt_poll_group_000", 00:16:02.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:02.245 "listen_address": { 00:16:02.245 "trtype": "TCP", 00:16:02.245 "adrfam": "IPv4", 00:16:02.245 "traddr": "10.0.0.2", 00:16:02.245 "trsvcid": "4420" 00:16:02.245 }, 00:16:02.245 "peer_address": { 00:16:02.245 "trtype": "TCP", 00:16:02.245 "adrfam": "IPv4", 00:16:02.245 "traddr": "10.0.0.1", 00:16:02.245 "trsvcid": "56390" 00:16:02.245 }, 00:16:02.245 "auth": { 00:16:02.245 "state": "completed", 00:16:02.245 "digest": "sha256", 00:16:02.245 "dhgroup": "ffdhe3072" 00:16:02.245 } 00:16:02.245 } 00:16:02.245 ]' 00:16:02.245 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.503 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.761 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:02.761 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.328 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.586 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.587 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.587 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.587 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.587 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.844 00:16:03.844 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.844 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.844 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.844 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.102 04:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.102 { 00:16:04.102 "cntlid": 21, 00:16:04.102 "qid": 0, 00:16:04.102 "state": "enabled", 00:16:04.102 "thread": "nvmf_tgt_poll_group_000", 00:16:04.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:04.102 "listen_address": { 00:16:04.102 "trtype": "TCP", 00:16:04.102 "adrfam": "IPv4", 00:16:04.102 "traddr": "10.0.0.2", 00:16:04.102 "trsvcid": "4420" 00:16:04.102 }, 00:16:04.102 "peer_address": { 00:16:04.102 "trtype": "TCP", 00:16:04.102 "adrfam": "IPv4", 00:16:04.102 "traddr": "10.0.0.1", 00:16:04.102 "trsvcid": "56402" 00:16:04.102 }, 00:16:04.102 "auth": { 00:16:04.102 "state": "completed", 00:16:04.102 "digest": "sha256", 00:16:04.102 "dhgroup": "ffdhe3072" 00:16:04.102 } 00:16:04.102 } 00:16:04.102 ]' 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.102 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.103 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.360 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:04.360 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
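# [editor's note] Two SPDK processes are in play throughout: the nvmf target,
# reached via the rpc_cmd wrapper (whose socket is not shown in this
# excerpt), and an initiator-side app reached via "hostrpc", which the trace
# expands to rpc.py -s /var/tmp/host.sock. A guess at the wrapper, matching
# how every hostrpc line expands in this log:
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers    # e.g. lists the attached "nvme0"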
-- # [[ 0 == 0 ]] 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.926 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.184 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.443 00:16:05.443 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.443 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.443 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.702 04:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.702 { 00:16:05.702 "cntlid": 23, 00:16:05.702 "qid": 0, 00:16:05.702 "state": "enabled", 00:16:05.702 "thread": "nvmf_tgt_poll_group_000", 00:16:05.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:05.702 "listen_address": { 00:16:05.702 "trtype": "TCP", 00:16:05.702 "adrfam": "IPv4", 00:16:05.702 "traddr": "10.0.0.2", 00:16:05.702 "trsvcid": "4420" 00:16:05.702 }, 00:16:05.702 "peer_address": { 00:16:05.702 "trtype": "TCP", 00:16:05.702 "adrfam": "IPv4", 00:16:05.702 "traddr": "10.0.0.1", 00:16:05.702 "trsvcid": "56418" 00:16:05.702 }, 00:16:05.702 "auth": { 00:16:05.702 "state": "completed", 00:16:05.702 "digest": "sha256", 00:16:05.702 "dhgroup": "ffdhe3072" 00:16:05.702 } 00:16:05.702 } 00:16:05.702 ]' 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.702 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.960 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:05.960 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.527 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.528 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.528 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.787 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.045 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.045 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.304 { 00:16:07.304 "cntlid": 25, 00:16:07.304 "qid": 0, 00:16:07.304 "state": "enabled", 00:16:07.304 "thread": "nvmf_tgt_poll_group_000", 00:16:07.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:07.304 "listen_address": { 00:16:07.304 "trtype": "TCP", 00:16:07.304 "adrfam": "IPv4", 00:16:07.304 "traddr": "10.0.0.2", 00:16:07.304 "trsvcid": "4420" 00:16:07.304 }, 00:16:07.304 "peer_address": { 00:16:07.304 "trtype": "TCP", 00:16:07.304 "adrfam": "IPv4", 00:16:07.304 "traddr": "10.0.0.1", 00:16:07.304 "trsvcid": "56440" 00:16:07.304 }, 00:16:07.304 "auth": { 00:16:07.304 "state": "completed", 00:16:07.304 "digest": "sha256", 00:16:07.304 "dhgroup": "ffdhe4096" 00:16:07.304 } 00:16:07.304 } 00:16:07.304 ]' 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.304 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.585 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:07.585 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
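The qpair dump above is the actual pass/fail evidence: the target reports what was negotiated on the connected qpair, and the script asserts it matches what was configured. The same check, condensed (jq filters exactly as in the trace):

    # Verify the negotiated auth parameters on the target side.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
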
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.199 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.200 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.458 00:16:08.458 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.458 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.458 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.716 { 00:16:08.716 "cntlid": 27, 00:16:08.716 "qid": 0, 00:16:08.716 "state": "enabled", 00:16:08.716 "thread": "nvmf_tgt_poll_group_000", 00:16:08.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:08.716 "listen_address": { 00:16:08.716 "trtype": "TCP", 00:16:08.716 "adrfam": "IPv4", 00:16:08.716 "traddr": "10.0.0.2", 00:16:08.716 "trsvcid": "4420" 00:16:08.716 }, 00:16:08.716 "peer_address": { 00:16:08.716 "trtype": "TCP", 00:16:08.716 "adrfam": "IPv4", 00:16:08.716 "traddr": "10.0.0.1", 00:16:08.716 "trsvcid": "38128" 00:16:08.716 }, 00:16:08.716 "auth": { 00:16:08.716 "state": "completed", 00:16:08.716 "digest": "sha256", 00:16:08.716 "dhgroup": "ffdhe4096" 00:16:08.716 } 00:16:08.716 } 00:16:08.716 ]' 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.716 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.974 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.974 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.974 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.974 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:08.974 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
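Each iteration also exercises the kernel initiator via nvme-cli, passing the DHHC-1 qualified secrets directly on the command line. A sketch with the long secrets factored into placeholders — HOST_SECRET and CTRL_SECRET stand in for the DHHC-1:xx:...: strings shown verbatim above and are not names used by the test; SUBNQN/HOSTNQN as in the first sketch:

    # Kernel-initiator leg: connect with the host secret and, where one exists,
    # the controller secret for bidirectional authentication.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
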
nqn.2024-03.io.spdk:cnode0 00:16:09.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.540 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.799 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.799 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.799 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.799 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.799 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.057 00:16:10.057 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
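The ckey=(${ckeys[$3]:+...}) expansion visible in the trace is why some add_host and attach calls carry --dhchap-ctrlr-key and others do not: inside connect_authenticate, $3 is the key id, and bash's ${var:+word} yields word only when that slot in the ckeys array is set and non-empty, so key ids without a controller key (key3 in this log) silently drop the flag:

    # Build the optional controller-key argument; empty for key ids with no ckey.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$3" "${ckey[@]}"
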
00:16:10.057 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.057 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.314 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.315 { 00:16:10.315 "cntlid": 29, 00:16:10.315 "qid": 0, 00:16:10.315 "state": "enabled", 00:16:10.315 "thread": "nvmf_tgt_poll_group_000", 00:16:10.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:10.315 "listen_address": { 00:16:10.315 "trtype": "TCP", 00:16:10.315 "adrfam": "IPv4", 00:16:10.315 "traddr": "10.0.0.2", 00:16:10.315 "trsvcid": "4420" 00:16:10.315 }, 00:16:10.315 "peer_address": { 00:16:10.315 "trtype": "TCP", 00:16:10.315 "adrfam": "IPv4", 00:16:10.315 "traddr": "10.0.0.1", 00:16:10.315 "trsvcid": "38164" 00:16:10.315 }, 00:16:10.315 "auth": { 00:16:10.315 "state": "completed", 00:16:10.315 "digest": "sha256", 00:16:10.315 "dhgroup": "ffdhe4096" 00:16:10.315 } 00:16:10.315 } 00:16:10.315 ]' 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.315 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.573 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.573 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.573 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.573 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:10.573 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: 
--dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.139 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.397 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.654 00:16:11.654 04:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.654 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.654 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.912 { 00:16:11.912 "cntlid": 31, 00:16:11.912 "qid": 0, 00:16:11.912 "state": "enabled", 00:16:11.912 "thread": "nvmf_tgt_poll_group_000", 00:16:11.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:11.912 "listen_address": { 00:16:11.912 "trtype": "TCP", 00:16:11.912 "adrfam": "IPv4", 00:16:11.912 "traddr": "10.0.0.2", 00:16:11.912 "trsvcid": "4420" 00:16:11.912 }, 00:16:11.912 "peer_address": { 00:16:11.912 "trtype": "TCP", 00:16:11.912 "adrfam": "IPv4", 00:16:11.912 "traddr": "10.0.0.1", 00:16:11.912 "trsvcid": "38184" 00:16:11.912 }, 00:16:11.912 "auth": { 00:16:11.912 "state": "completed", 00:16:11.912 "digest": "sha256", 00:16:11.912 "dhgroup": "ffdhe4096" 00:16:11.912 } 00:16:11.912 } 00:16:11.912 ]' 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.912 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.169 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.170 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.170 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.170 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:12.170 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:12.735 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.735 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.735 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.735 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.735 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.735 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.735 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.735 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.735 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.993 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
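The @119/@120 markers above are the two loops driving this whole section; each (dhgroup, keyid) combination gets an identical configure/connect/verify/teardown pass. A minimal reconstruction from the trace — the dhgroups and keys arrays are defined earlier in auth.sh and are not visible in this excerpt; this slice of the log covers sha256 with ffdhe3072 through ffdhe8192 and key ids 0-3:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Re-pin the host options for this combination, then run the pass.
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
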
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.252 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.510 { 00:16:13.510 "cntlid": 33, 00:16:13.510 "qid": 0, 00:16:13.510 "state": "enabled", 00:16:13.510 "thread": "nvmf_tgt_poll_group_000", 00:16:13.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.510 "listen_address": { 00:16:13.510 "trtype": "TCP", 00:16:13.510 "adrfam": "IPv4", 00:16:13.510 "traddr": "10.0.0.2", 00:16:13.510 "trsvcid": "4420" 00:16:13.510 }, 00:16:13.510 "peer_address": { 00:16:13.510 "trtype": "TCP", 00:16:13.510 "adrfam": "IPv4", 00:16:13.510 "traddr": "10.0.0.1", 00:16:13.510 "trsvcid": "38214" 00:16:13.510 }, 00:16:13.510 "auth": { 00:16:13.510 "state": "completed", 00:16:13.510 "digest": "sha256", 00:16:13.510 "dhgroup": "ffdhe6144" 00:16:13.510 } 00:16:13.510 } 00:16:13.510 ]' 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.510 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.767 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.025 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:14.025 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:14.590 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.590 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:14.590 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.590 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.590 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.591 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.848 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.849 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.849 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.849 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.107 00:16:15.107 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.107 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.107 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.365 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.365 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.365 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.365 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.366 { 00:16:15.366 "cntlid": 35, 00:16:15.366 "qid": 0, 00:16:15.366 "state": "enabled", 00:16:15.366 "thread": "nvmf_tgt_poll_group_000", 00:16:15.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:15.366 "listen_address": { 00:16:15.366 "trtype": "TCP", 00:16:15.366 "adrfam": "IPv4", 00:16:15.366 "traddr": "10.0.0.2", 00:16:15.366 "trsvcid": "4420" 00:16:15.366 }, 00:16:15.366 "peer_address": { 00:16:15.366 "trtype": "TCP", 00:16:15.366 "adrfam": "IPv4", 00:16:15.366 "traddr": "10.0.0.1", 00:16:15.366 "trsvcid": "38240" 00:16:15.366 }, 00:16:15.366 "auth": { 00:16:15.366 "state": "completed", 00:16:15.366 "digest": "sha256", 00:16:15.366 "dhgroup": "ffdhe6144" 00:16:15.366 } 00:16:15.366 } 00:16:15.366 ]' 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.366 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.624 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:15.624 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.190 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.448 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.706 00:16:16.706 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.706 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.706 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.964 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.965 { 00:16:16.965 "cntlid": 37, 00:16:16.965 "qid": 0, 00:16:16.965 "state": "enabled", 00:16:16.965 "thread": "nvmf_tgt_poll_group_000", 00:16:16.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:16.965 "listen_address": { 00:16:16.965 "trtype": "TCP", 00:16:16.965 "adrfam": "IPv4", 00:16:16.965 "traddr": "10.0.0.2", 00:16:16.965 "trsvcid": "4420" 00:16:16.965 }, 00:16:16.965 "peer_address": { 00:16:16.965 "trtype": "TCP", 00:16:16.965 "adrfam": "IPv4", 00:16:16.965 "traddr": "10.0.0.1", 00:16:16.965 "trsvcid": "38262" 00:16:16.965 }, 00:16:16.965 "auth": { 00:16:16.965 "state": "completed", 00:16:16.965 "digest": "sha256", 00:16:16.965 "dhgroup": "ffdhe6144" 00:16:16.965 } 00:16:16.965 } 00:16:16.965 ]' 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
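Between iterations the script tears everything down so the next (dhgroup, key) pair starts from a clean slate, as the @78/@82/@83 markers trace: detach the host-side controller, disconnect the kernel-initiator session, then de-authorize the host on the target (SUBNQN/HOSTNQN as in the first sketch):

    hostrpc bdev_nvme_detach_controller nvme0
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
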
bdev_nvme_detach_controller nvme0 00:16:16.965 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.223 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:17.223 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.790 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.048 04:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.048 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.307 00:16:18.307 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.307 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.307 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.565 { 00:16:18.565 "cntlid": 39, 00:16:18.565 "qid": 0, 00:16:18.565 "state": "enabled", 00:16:18.565 "thread": "nvmf_tgt_poll_group_000", 00:16:18.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:18.565 "listen_address": { 00:16:18.565 "trtype": "TCP", 00:16:18.565 "adrfam": "IPv4", 00:16:18.565 "traddr": "10.0.0.2", 00:16:18.565 "trsvcid": "4420" 00:16:18.565 }, 00:16:18.565 "peer_address": { 00:16:18.565 "trtype": "TCP", 00:16:18.565 "adrfam": "IPv4", 00:16:18.565 "traddr": "10.0.0.1", 00:16:18.565 "trsvcid": "41842" 00:16:18.565 }, 00:16:18.565 "auth": { 00:16:18.565 "state": "completed", 00:16:18.565 "digest": "sha256", 00:16:18.565 "dhgroup": "ffdhe6144" 00:16:18.565 } 00:16:18.565 } 00:16:18.565 ]' 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.565 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.823 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:18.823 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.823 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.823 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:18.824 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.390 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.648 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.215 00:16:20.215 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.215 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.215 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.473 { 00:16:20.473 "cntlid": 41, 00:16:20.473 "qid": 0, 00:16:20.473 "state": "enabled", 00:16:20.473 "thread": "nvmf_tgt_poll_group_000", 00:16:20.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:20.473 "listen_address": { 00:16:20.473 "trtype": "TCP", 00:16:20.473 "adrfam": "IPv4", 00:16:20.473 "traddr": "10.0.0.2", 00:16:20.473 "trsvcid": "4420" 00:16:20.473 }, 00:16:20.473 "peer_address": { 00:16:20.473 "trtype": "TCP", 00:16:20.473 "adrfam": "IPv4", 00:16:20.473 "traddr": "10.0.0.1", 00:16:20.473 "trsvcid": "41850" 00:16:20.473 }, 00:16:20.473 "auth": { 00:16:20.473 "state": "completed", 00:16:20.473 "digest": "sha256", 00:16:20.473 "dhgroup": "ffdhe8192" 00:16:20.473 } 00:16:20.473 } 00:16:20.473 ]' 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.473 04:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.473 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.732 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:20.732 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.299 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.557 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.124 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.124 { 00:16:22.124 "cntlid": 43, 00:16:22.124 "qid": 0, 00:16:22.124 "state": "enabled", 00:16:22.124 "thread": "nvmf_tgt_poll_group_000", 00:16:22.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:22.124 "listen_address": { 00:16:22.124 "trtype": "TCP", 00:16:22.124 "adrfam": "IPv4", 00:16:22.124 "traddr": "10.0.0.2", 00:16:22.124 "trsvcid": "4420" 00:16:22.124 }, 00:16:22.124 "peer_address": { 00:16:22.124 "trtype": "TCP", 00:16:22.124 "adrfam": "IPv4", 00:16:22.124 "traddr": "10.0.0.1", 00:16:22.124 "trsvcid": "41872" 00:16:22.124 }, 00:16:22.124 "auth": { 00:16:22.124 "state": "completed", 00:16:22.124 "digest": "sha256", 00:16:22.124 "dhgroup": "ffdhe8192" 00:16:22.124 } 00:16:22.124 } 00:16:22.124 ]' 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:22.124 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.383 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.383 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.383 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.383 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.383 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.642 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:22.642 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.209 04:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.209 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.777 00:16:23.777 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.777 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.777 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.035 { 00:16:24.035 "cntlid": 45, 00:16:24.035 "qid": 0, 00:16:24.035 "state": "enabled", 00:16:24.035 "thread": "nvmf_tgt_poll_group_000", 00:16:24.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:24.035 "listen_address": { 00:16:24.035 "trtype": "TCP", 00:16:24.035 "adrfam": "IPv4", 00:16:24.035 "traddr": "10.0.0.2", 00:16:24.035 "trsvcid": "4420" 00:16:24.035 }, 00:16:24.035 "peer_address": { 00:16:24.035 "trtype": "TCP", 00:16:24.035 "adrfam": "IPv4", 00:16:24.035 "traddr": "10.0.0.1", 00:16:24.035 "trsvcid": "41898" 00:16:24.035 }, 00:16:24.035 "auth": { 00:16:24.035 "state": "completed", 00:16:24.035 "digest": "sha256", 00:16:24.035 "dhgroup": "ffdhe8192" 00:16:24.035 } 00:16:24.035 } 00:16:24.035 ]' 00:16:24.035 
04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.035 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.036 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.295 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:24.295 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.862 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.121 04:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.121 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.688 00:16:25.688 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.688 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.688 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.688 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.949 { 00:16:25.949 "cntlid": 47, 00:16:25.949 "qid": 0, 00:16:25.949 "state": "enabled", 00:16:25.949 "thread": "nvmf_tgt_poll_group_000", 00:16:25.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:25.949 "listen_address": { 00:16:25.949 "trtype": "TCP", 00:16:25.949 "adrfam": "IPv4", 00:16:25.949 "traddr": "10.0.0.2", 00:16:25.949 "trsvcid": "4420" 00:16:25.949 }, 00:16:25.949 "peer_address": { 00:16:25.949 "trtype": "TCP", 00:16:25.949 "adrfam": "IPv4", 00:16:25.949 "traddr": "10.0.0.1", 00:16:25.949 "trsvcid": "41926" 00:16:25.949 }, 00:16:25.949 "auth": { 00:16:25.949 "state": "completed", 00:16:25.949 
"digest": "sha256", 00:16:25.949 "dhgroup": "ffdhe8192" 00:16:25.949 } 00:16:25.949 } 00:16:25.949 ]' 00:16:25.949 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.949 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.208 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:26.208 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.776 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:26.776 04:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.776 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.034 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.034 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.034 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.034 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.034 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.293 { 00:16:27.293 "cntlid": 49, 00:16:27.293 "qid": 0, 00:16:27.293 "state": "enabled", 00:16:27.293 "thread": "nvmf_tgt_poll_group_000", 00:16:27.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:27.293 "listen_address": { 00:16:27.293 "trtype": "TCP", 00:16:27.293 "adrfam": "IPv4", 
00:16:27.293 "traddr": "10.0.0.2", 00:16:27.293 "trsvcid": "4420" 00:16:27.293 }, 00:16:27.293 "peer_address": { 00:16:27.293 "trtype": "TCP", 00:16:27.293 "adrfam": "IPv4", 00:16:27.293 "traddr": "10.0.0.1", 00:16:27.293 "trsvcid": "41962" 00:16:27.293 }, 00:16:27.293 "auth": { 00:16:27.293 "state": "completed", 00:16:27.293 "digest": "sha384", 00:16:27.293 "dhgroup": "null" 00:16:27.293 } 00:16:27.293 } 00:16:27.293 ]' 00:16:27.293 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.551 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.809 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:27.809 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.377 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.635 00:16:28.635 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.635 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.635 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.893 { 00:16:28.893 "cntlid": 51, 00:16:28.893 "qid": 0, 00:16:28.893 "state": "enabled", 
00:16:28.893 "thread": "nvmf_tgt_poll_group_000", 00:16:28.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:28.893 "listen_address": { 00:16:28.893 "trtype": "TCP", 00:16:28.893 "adrfam": "IPv4", 00:16:28.893 "traddr": "10.0.0.2", 00:16:28.893 "trsvcid": "4420" 00:16:28.893 }, 00:16:28.893 "peer_address": { 00:16:28.893 "trtype": "TCP", 00:16:28.893 "adrfam": "IPv4", 00:16:28.893 "traddr": "10.0.0.1", 00:16:28.893 "trsvcid": "60658" 00:16:28.893 }, 00:16:28.893 "auth": { 00:16:28.893 "state": "completed", 00:16:28.893 "digest": "sha384", 00:16:28.893 "dhgroup": "null" 00:16:28.893 } 00:16:28.893 } 00:16:28.893 ]' 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.893 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:29.153 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:29.723 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.723 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.723 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.723 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.982 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.240 00:16:30.240 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.240 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.240 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.499 04:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.499 { 00:16:30.499 "cntlid": 53, 00:16:30.499 "qid": 0, 00:16:30.499 "state": "enabled", 00:16:30.499 "thread": "nvmf_tgt_poll_group_000", 00:16:30.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:30.499 "listen_address": { 00:16:30.499 "trtype": "TCP", 00:16:30.499 "adrfam": "IPv4", 00:16:30.499 "traddr": "10.0.0.2", 00:16:30.499 "trsvcid": "4420" 00:16:30.499 }, 00:16:30.499 "peer_address": { 00:16:30.499 "trtype": "TCP", 00:16:30.499 "adrfam": "IPv4", 00:16:30.499 "traddr": "10.0.0.1", 00:16:30.499 "trsvcid": "60682" 00:16:30.499 }, 00:16:30.499 "auth": { 00:16:30.499 "state": "completed", 00:16:30.499 "digest": "sha384", 00:16:30.499 "dhgroup": "null" 00:16:30.499 } 00:16:30.499 } 00:16:30.499 ]' 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.499 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.758 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.758 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.758 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.758 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:30.758 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.326 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.584 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.842 00:16:31.842 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.842 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.842 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.101 { 00:16:32.101 "cntlid": 55, 00:16:32.101 "qid": 0, 00:16:32.101 "state": "enabled", 00:16:32.101 "thread": "nvmf_tgt_poll_group_000", 00:16:32.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:32.101 "listen_address": { 00:16:32.101 "trtype": "TCP", 00:16:32.101 "adrfam": "IPv4", 00:16:32.101 "traddr": "10.0.0.2", 00:16:32.101 "trsvcid": "4420" 00:16:32.101 }, 00:16:32.101 "peer_address": { 00:16:32.101 "trtype": "TCP", 00:16:32.101 "adrfam": "IPv4", 00:16:32.101 "traddr": "10.0.0.1", 00:16:32.101 "trsvcid": "60700" 00:16:32.101 }, 00:16:32.101 "auth": { 00:16:32.101 "state": "completed", 00:16:32.101 "digest": "sha384", 00:16:32.101 "dhgroup": "null" 00:16:32.101 } 00:16:32.101 } 00:16:32.101 ]' 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.101 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.360 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:32.360 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.926 04:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.926 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.185 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.444 00:16:33.444 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.444 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.444 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.702 { 00:16:33.702 "cntlid": 57, 00:16:33.702 "qid": 0, 00:16:33.702 "state": "enabled", 00:16:33.702 "thread": "nvmf_tgt_poll_group_000", 00:16:33.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:33.702 "listen_address": { 00:16:33.702 "trtype": "TCP", 00:16:33.702 "adrfam": "IPv4", 00:16:33.702 "traddr": "10.0.0.2", 00:16:33.702 "trsvcid": "4420" 00:16:33.702 }, 00:16:33.702 "peer_address": { 00:16:33.702 "trtype": "TCP", 00:16:33.702 "adrfam": "IPv4", 00:16:33.702 "traddr": "10.0.0.1", 00:16:33.702 "trsvcid": "60716" 00:16:33.702 }, 00:16:33.702 "auth": { 00:16:33.702 "state": "completed", 00:16:33.702 "digest": "sha384", 00:16:33.702 "dhgroup": "ffdhe2048" 00:16:33.702 } 00:16:33.702 } 00:16:33.702 ]' 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.702 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.960 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.960 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.960 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.960 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:33.960 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.527 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.796 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.054 00:16:35.054 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.054 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.054 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.311 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.311 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.312 { 00:16:35.312 "cntlid": 59, 00:16:35.312 "qid": 0, 00:16:35.312 "state": "enabled", 00:16:35.312 "thread": "nvmf_tgt_poll_group_000", 00:16:35.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:35.312 "listen_address": { 00:16:35.312 "trtype": "TCP", 00:16:35.312 "adrfam": "IPv4", 00:16:35.312 "traddr": "10.0.0.2", 00:16:35.312 "trsvcid": "4420" 00:16:35.312 }, 00:16:35.312 "peer_address": { 00:16:35.312 "trtype": "TCP", 00:16:35.312 "adrfam": "IPv4", 00:16:35.312 "traddr": "10.0.0.1", 00:16:35.312 "trsvcid": "60742" 00:16:35.312 }, 00:16:35.312 "auth": { 00:16:35.312 "state": "completed", 00:16:35.312 "digest": "sha384", 00:16:35.312 "dhgroup": "ffdhe2048" 00:16:35.312 } 00:16:35.312 } 00:16:35.312 ]' 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.312 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.570 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:35.570 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.137 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.396 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.655 00:16:36.655 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.655 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.655 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.913 { 00:16:36.913 "cntlid": 61, 00:16:36.913 "qid": 0, 00:16:36.913 "state": "enabled", 00:16:36.913 "thread": "nvmf_tgt_poll_group_000", 00:16:36.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:36.913 "listen_address": { 00:16:36.913 "trtype": "TCP", 00:16:36.913 "adrfam": "IPv4", 00:16:36.913 "traddr": "10.0.0.2", 00:16:36.913 "trsvcid": "4420" 00:16:36.913 }, 00:16:36.913 "peer_address": { 00:16:36.913 "trtype": "TCP", 00:16:36.913 "adrfam": "IPv4", 00:16:36.913 "traddr": "10.0.0.1", 00:16:36.913 "trsvcid": "60780" 00:16:36.913 }, 00:16:36.913 "auth": { 00:16:36.913 "state": "completed", 00:16:36.913 "digest": "sha384", 00:16:36.913 "dhgroup": "ffdhe2048" 00:16:36.913 } 00:16:36.913 } 00:16:36.913 ]' 00:16:36.913 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.913 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.171 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:37.171 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:37.738 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.738 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.738 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.738 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.738 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.739 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.739 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.739 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.997 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.256 00:16:38.256 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.256 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.256 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.515 { 00:16:38.515 "cntlid": 63, 00:16:38.515 "qid": 0, 00:16:38.515 "state": "enabled", 00:16:38.515 "thread": "nvmf_tgt_poll_group_000", 00:16:38.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:38.515 "listen_address": { 00:16:38.515 "trtype": "TCP", 00:16:38.515 "adrfam": "IPv4", 00:16:38.515 "traddr": "10.0.0.2", 00:16:38.515 "trsvcid": "4420" 00:16:38.515 }, 00:16:38.515 "peer_address": { 00:16:38.515 "trtype": "TCP", 00:16:38.515 "adrfam": "IPv4", 00:16:38.515 "traddr": "10.0.0.1", 00:16:38.515 "trsvcid": "57416" 00:16:38.515 }, 00:16:38.515 "auth": { 00:16:38.515 "state": "completed", 00:16:38.515 "digest": "sha384", 00:16:38.515 "dhgroup": "ffdhe2048" 00:16:38.515 } 00:16:38.515 } 00:16:38.515 ]' 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.515 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.773 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:38.773 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:39.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.340 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.598 
00:16:39.856 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.856 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.856 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.856 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.857 { 00:16:39.857 "cntlid": 65, 00:16:39.857 "qid": 0, 00:16:39.857 "state": "enabled", 00:16:39.857 "thread": "nvmf_tgt_poll_group_000", 00:16:39.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:39.857 "listen_address": { 00:16:39.857 "trtype": "TCP", 00:16:39.857 "adrfam": "IPv4", 00:16:39.857 "traddr": "10.0.0.2", 00:16:39.857 "trsvcid": "4420" 00:16:39.857 }, 00:16:39.857 "peer_address": { 00:16:39.857 "trtype": "TCP", 00:16:39.857 "adrfam": "IPv4", 00:16:39.857 "traddr": "10.0.0.1", 00:16:39.857 "trsvcid": "57448" 00:16:39.857 }, 00:16:39.857 "auth": { 00:16:39.857 "state": "completed", 00:16:39.857 "digest": "sha384", 00:16:39.857 "dhgroup": "ffdhe3072" 00:16:39.857 } 00:16:39.857 } 00:16:39.857 ]' 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.857 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.115 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.115 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.115 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.115 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.115 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.373 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:40.374 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:40.941 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.941 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.941 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.941 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.941 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.200 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.458 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.459 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.459 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.459 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.459 { 00:16:41.459 "cntlid": 67, 00:16:41.459 "qid": 0, 00:16:41.459 "state": "enabled", 00:16:41.459 "thread": "nvmf_tgt_poll_group_000", 00:16:41.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:41.459 "listen_address": { 00:16:41.459 "trtype": "TCP", 00:16:41.459 "adrfam": "IPv4", 00:16:41.459 "traddr": "10.0.0.2", 00:16:41.459 "trsvcid": "4420" 00:16:41.459 }, 00:16:41.459 "peer_address": { 00:16:41.459 "trtype": "TCP", 00:16:41.459 "adrfam": "IPv4", 00:16:41.459 "traddr": "10.0.0.1", 00:16:41.459 "trsvcid": "57488" 00:16:41.459 }, 00:16:41.459 "auth": { 00:16:41.459 "state": "completed", 00:16:41.459 "digest": "sha384", 00:16:41.459 "dhgroup": "ffdhe3072" 00:16:41.459 } 00:16:41.459 } 00:16:41.459 ]' 00:16:41.459 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.717 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.976 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret 
DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:41.976 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.542 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.543 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.801 00:16:42.801 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.801 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.801 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.059 { 00:16:43.059 "cntlid": 69, 00:16:43.059 "qid": 0, 00:16:43.059 "state": "enabled", 00:16:43.059 "thread": "nvmf_tgt_poll_group_000", 00:16:43.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:43.059 "listen_address": { 00:16:43.059 "trtype": "TCP", 00:16:43.059 "adrfam": "IPv4", 00:16:43.059 "traddr": "10.0.0.2", 00:16:43.059 "trsvcid": "4420" 00:16:43.059 }, 00:16:43.059 "peer_address": { 00:16:43.059 "trtype": "TCP", 00:16:43.059 "adrfam": "IPv4", 00:16:43.059 "traddr": "10.0.0.1", 00:16:43.059 "trsvcid": "57526" 00:16:43.059 }, 00:16:43.059 "auth": { 00:16:43.059 "state": "completed", 00:16:43.059 "digest": "sha384", 00:16:43.059 "dhgroup": "ffdhe3072" 00:16:43.059 } 00:16:43.059 } 00:16:43.059 ]' 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.059 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.318 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.318 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.318 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:43.318 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:43.318 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:43.885 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.885 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.885 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.886 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.886 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.886 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.886 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.886 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.144 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.403 00:16:44.403 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.403 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.403 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.662 { 00:16:44.662 "cntlid": 71, 00:16:44.662 "qid": 0, 00:16:44.662 "state": "enabled", 00:16:44.662 "thread": "nvmf_tgt_poll_group_000", 00:16:44.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:44.662 "listen_address": { 00:16:44.662 "trtype": "TCP", 00:16:44.662 "adrfam": "IPv4", 00:16:44.662 "traddr": "10.0.0.2", 00:16:44.662 "trsvcid": "4420" 00:16:44.662 }, 00:16:44.662 "peer_address": { 00:16:44.662 "trtype": "TCP", 00:16:44.662 "adrfam": "IPv4", 00:16:44.662 "traddr": "10.0.0.1", 00:16:44.662 "trsvcid": "57552" 00:16:44.662 }, 00:16:44.662 "auth": { 00:16:44.662 "state": "completed", 00:16:44.662 "digest": "sha384", 00:16:44.662 "dhgroup": "ffdhe3072" 00:16:44.662 } 00:16:44.662 } 00:16:44.662 ]' 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.662 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.920 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.920 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.920 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.920 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:44.920 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.521 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.888 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.172 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.172 { 00:16:46.172 "cntlid": 73, 00:16:46.172 "qid": 0, 00:16:46.172 "state": "enabled", 00:16:46.172 "thread": "nvmf_tgt_poll_group_000", 00:16:46.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:46.172 "listen_address": { 00:16:46.172 "trtype": "TCP", 00:16:46.172 "adrfam": "IPv4", 00:16:46.172 "traddr": "10.0.0.2", 00:16:46.172 "trsvcid": "4420" 00:16:46.172 }, 00:16:46.172 "peer_address": { 00:16:46.172 "trtype": "TCP", 00:16:46.172 "adrfam": "IPv4", 00:16:46.172 "traddr": "10.0.0.1", 00:16:46.172 "trsvcid": "57578" 00:16:46.172 }, 00:16:46.172 "auth": { 00:16:46.172 "state": "completed", 00:16:46.172 "digest": "sha384", 00:16:46.172 "dhgroup": "ffdhe4096" 00:16:46.172 } 00:16:46.172 } 00:16:46.172 ]' 00:16:46.172 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.430 
04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.430 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.688 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:46.688 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.256 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.515 00:16:47.515 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.515 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.515 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.773 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.773 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.773 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.773 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.773 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.773 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.773 { 00:16:47.773 "cntlid": 75, 00:16:47.773 "qid": 0, 00:16:47.773 "state": "enabled", 00:16:47.773 "thread": "nvmf_tgt_poll_group_000", 00:16:47.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:47.773 "listen_address": { 00:16:47.773 "trtype": "TCP", 00:16:47.773 "adrfam": "IPv4", 00:16:47.773 "traddr": "10.0.0.2", 00:16:47.773 "trsvcid": "4420" 00:16:47.773 }, 00:16:47.773 "peer_address": { 00:16:47.773 "trtype": "TCP", 00:16:47.773 "adrfam": "IPv4", 00:16:47.773 "traddr": "10.0.0.1", 00:16:47.773 "trsvcid": "48272" 00:16:47.773 }, 00:16:47.773 "auth": { 00:16:47.773 "state": "completed", 00:16:47.773 "digest": "sha384", 00:16:47.773 "dhgroup": "ffdhe4096" 00:16:47.773 } 00:16:47.773 } 00:16:47.773 ]' 00:16:47.773 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.773 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.773 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.032 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:48.032 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.032 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.032 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.032 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.291 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:48.291 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.859 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.859 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.118 00:16:49.118 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.118 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.118 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.377 { 00:16:49.377 "cntlid": 77, 00:16:49.377 "qid": 0, 00:16:49.377 "state": "enabled", 00:16:49.377 "thread": "nvmf_tgt_poll_group_000", 00:16:49.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:49.377 "listen_address": { 00:16:49.377 "trtype": "TCP", 00:16:49.377 "adrfam": "IPv4", 00:16:49.377 "traddr": "10.0.0.2", 00:16:49.377 "trsvcid": "4420" 00:16:49.377 }, 00:16:49.377 "peer_address": { 00:16:49.377 "trtype": "TCP", 00:16:49.377 "adrfam": "IPv4", 00:16:49.377 "traddr": "10.0.0.1", 00:16:49.377 "trsvcid": "48304" 00:16:49.377 }, 00:16:49.377 "auth": { 00:16:49.377 "state": "completed", 00:16:49.377 "digest": "sha384", 00:16:49.377 "dhgroup": "ffdhe4096" 00:16:49.377 } 00:16:49.377 } 00:16:49.377 ]' 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.377 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.377 04:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.636 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.636 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.636 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.636 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.636 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.895 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:49.895 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.462 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.463 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.722 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.980 { 00:16:50.980 "cntlid": 79, 00:16:50.980 "qid": 0, 00:16:50.980 "state": "enabled", 00:16:50.980 "thread": "nvmf_tgt_poll_group_000", 00:16:50.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:50.980 "listen_address": { 00:16:50.980 "trtype": "TCP", 00:16:50.980 "adrfam": "IPv4", 00:16:50.980 "traddr": "10.0.0.2", 00:16:50.980 "trsvcid": "4420" 00:16:50.980 }, 00:16:50.980 "peer_address": { 00:16:50.980 "trtype": "TCP", 00:16:50.980 "adrfam": "IPv4", 00:16:50.980 "traddr": "10.0.0.1", 00:16:50.980 "trsvcid": "48324" 00:16:50.980 }, 00:16:50.980 "auth": { 00:16:50.980 "state": "completed", 00:16:50.980 "digest": "sha384", 00:16:50.980 "dhgroup": "ffdhe4096" 00:16:50.980 } 00:16:50.980 } 00:16:50.980 ]' 00:16:50.980 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.239 04:03:50 
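# Worth noting in the key3 pass above: ckeys[3] is empty, so the
# ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion contributes no
# arguments and only the unidirectional secret is negotiated (there is no
# --dhchap-ctrlr-key on either the add_host or the attach_controller call).
# A toy illustration of that expansion (array values are placeholders):
ckeys=("c0" "c1" "c2" "")
for i in "${!ckeys[@]}"; do
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i -> ${ckey[*]:-<no controller key>}"
done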
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.239 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.499 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:51.499 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.065 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.066 04:03:51 
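# The loop is now switching from ffdhe4096 to ffdhe6144: the test iterates
# DH groups times key ids, reconfiguring the host's allowed parameters before
# each connect/verify cycle. A condensed sketch of that driver loop, assuming
# the keys/ckeys arrays were populated earlier in the script (hostrpc wraps
# rpc.py -s /var/tmp/host.sock, as shown in the trace):
digest=sha384
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do   # groups seen in this part of the log
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done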
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.066 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.324 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.324 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.324 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.324 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.583 00:16:52.583 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.583 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.583 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.842 { 00:16:52.842 "cntlid": 81, 00:16:52.842 "qid": 0, 00:16:52.842 "state": "enabled", 00:16:52.842 "thread": "nvmf_tgt_poll_group_000", 00:16:52.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.842 "listen_address": { 00:16:52.842 "trtype": "TCP", 00:16:52.842 "adrfam": "IPv4", 00:16:52.842 "traddr": "10.0.0.2", 00:16:52.842 "trsvcid": "4420" 00:16:52.842 }, 00:16:52.842 "peer_address": { 00:16:52.842 "trtype": "TCP", 00:16:52.842 "adrfam": "IPv4", 00:16:52.842 "traddr": "10.0.0.1", 00:16:52.842 "trsvcid": "48362" 00:16:52.842 }, 00:16:52.842 "auth": { 00:16:52.842 "state": "completed", 00:16:52.842 "digest": 
"sha384", 00:16:52.842 "dhgroup": "ffdhe6144" 00:16:52.842 } 00:16:52.842 } 00:16:52.842 ]' 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.842 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.842 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.842 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.842 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.842 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.842 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.100 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:53.101 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.667 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.926 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.183 00:16:54.183 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.183 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.183 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.441 { 00:16:54.441 "cntlid": 83, 00:16:54.441 "qid": 0, 00:16:54.441 "state": "enabled", 00:16:54.441 "thread": "nvmf_tgt_poll_group_000", 00:16:54.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:54.441 "listen_address": { 00:16:54.441 "trtype": "TCP", 00:16:54.441 "adrfam": "IPv4", 00:16:54.441 "traddr": "10.0.0.2", 00:16:54.441 
"trsvcid": "4420" 00:16:54.441 }, 00:16:54.441 "peer_address": { 00:16:54.441 "trtype": "TCP", 00:16:54.441 "adrfam": "IPv4", 00:16:54.441 "traddr": "10.0.0.1", 00:16:54.441 "trsvcid": "48390" 00:16:54.441 }, 00:16:54.441 "auth": { 00:16:54.441 "state": "completed", 00:16:54.441 "digest": "sha384", 00:16:54.441 "dhgroup": "ffdhe6144" 00:16:54.441 } 00:16:54.441 } 00:16:54.441 ]' 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.441 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:54.699 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.266 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.526 
04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:55.526 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.526 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.526 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.527 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.094 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.094 { 00:16:56.094 "cntlid": 85, 00:16:56.094 "qid": 0, 00:16:56.094 "state": "enabled", 00:16:56.094 "thread": "nvmf_tgt_poll_group_000", 00:16:56.094 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:56.094 "listen_address": { 00:16:56.094 "trtype": "TCP", 00:16:56.094 "adrfam": "IPv4", 00:16:56.094 "traddr": "10.0.0.2", 00:16:56.094 "trsvcid": "4420" 00:16:56.094 }, 00:16:56.094 "peer_address": { 00:16:56.094 "trtype": "TCP", 00:16:56.094 "adrfam": "IPv4", 00:16:56.094 "traddr": "10.0.0.1", 00:16:56.094 "trsvcid": "48402" 00:16:56.094 }, 00:16:56.094 "auth": { 00:16:56.094 "state": "completed", 00:16:56.094 "digest": "sha384", 00:16:56.094 "dhgroup": "ffdhe6144" 00:16:56.094 } 00:16:56.094 } 00:16:56.094 ]' 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.094 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.352 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.352 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.352 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.352 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.352 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.612 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:56.612 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.179 04:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:57.179 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.180 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.747 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.747 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.747 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.747 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.747 { 00:16:57.747 "cntlid": 87, 
00:16:57.747 "qid": 0, 00:16:57.747 "state": "enabled", 00:16:57.747 "thread": "nvmf_tgt_poll_group_000", 00:16:57.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.747 "listen_address": { 00:16:57.747 "trtype": "TCP", 00:16:57.747 "adrfam": "IPv4", 00:16:57.747 "traddr": "10.0.0.2", 00:16:57.747 "trsvcid": "4420" 00:16:57.747 }, 00:16:57.747 "peer_address": { 00:16:57.747 "trtype": "TCP", 00:16:57.747 "adrfam": "IPv4", 00:16:57.747 "traddr": "10.0.0.1", 00:16:57.747 "trsvcid": "48430" 00:16:57.747 }, 00:16:57.747 "auth": { 00:16:57.747 "state": "completed", 00:16:57.747 "digest": "sha384", 00:16:57.747 "dhgroup": "ffdhe6144" 00:16:57.747 } 00:16:57.747 } 00:16:57.747 ]' 00:16:57.747 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.006 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.264 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:58.264 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:16:58.831 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.831 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.831 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.832 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.832 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.398 00:16:59.398 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.398 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.398 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.657 { 00:16:59.657 "cntlid": 89, 00:16:59.657 "qid": 0, 00:16:59.657 "state": "enabled", 00:16:59.657 "thread": "nvmf_tgt_poll_group_000", 00:16:59.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:59.657 "listen_address": { 00:16:59.657 "trtype": "TCP", 00:16:59.657 "adrfam": "IPv4", 00:16:59.657 "traddr": "10.0.0.2", 00:16:59.657 "trsvcid": "4420" 00:16:59.657 }, 00:16:59.657 "peer_address": { 00:16:59.657 "trtype": "TCP", 00:16:59.657 "adrfam": "IPv4", 00:16:59.657 "traddr": "10.0.0.1", 00:16:59.657 "trsvcid": "56870" 00:16:59.657 }, 00:16:59.657 "auth": { 00:16:59.657 "state": "completed", 00:16:59.657 "digest": "sha384", 00:16:59.657 "dhgroup": "ffdhe8192" 00:16:59.657 } 00:16:59.657 } 00:16:59.657 ]' 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.657 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.916 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:16:59.916 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.483 04:03:59 
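# For reference, the whole per-key cycle distilled from the auth.sh@65-@78
# lines traced above (a condensed sketch, not the verbatim function; hostnqn
# and the ckeys handling follow the trace):
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    local qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    hostrpc bdev_nvme_detach_controller nvme0
}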
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.483 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.742 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.311 00:17:01.311 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.311 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.311 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.569 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.570 { 00:17:01.570 "cntlid": 91, 00:17:01.570 "qid": 0, 00:17:01.570 "state": "enabled", 00:17:01.570 "thread": "nvmf_tgt_poll_group_000", 00:17:01.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:01.570 "listen_address": { 00:17:01.570 "trtype": "TCP", 00:17:01.570 "adrfam": "IPv4", 00:17:01.570 "traddr": "10.0.0.2", 00:17:01.570 "trsvcid": "4420" 00:17:01.570 }, 00:17:01.570 "peer_address": { 00:17:01.570 "trtype": "TCP", 00:17:01.570 "adrfam": "IPv4", 00:17:01.570 "traddr": "10.0.0.1", 00:17:01.570 "trsvcid": "56898" 00:17:01.570 }, 00:17:01.570 "auth": { 00:17:01.570 "state": "completed", 00:17:01.570 "digest": "sha384", 00:17:01.570 "dhgroup": "ffdhe8192" 00:17:01.570 } 00:17:01.570 } 00:17:01.570 ]' 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.570 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.828 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:01.828 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:02.395 04:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.395 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.654 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.222 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.222 04:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.222 { 00:17:03.222 "cntlid": 93, 00:17:03.222 "qid": 0, 00:17:03.222 "state": "enabled", 00:17:03.222 "thread": "nvmf_tgt_poll_group_000", 00:17:03.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:03.222 "listen_address": { 00:17:03.222 "trtype": "TCP", 00:17:03.222 "adrfam": "IPv4", 00:17:03.222 "traddr": "10.0.0.2", 00:17:03.222 "trsvcid": "4420" 00:17:03.222 }, 00:17:03.222 "peer_address": { 00:17:03.222 "trtype": "TCP", 00:17:03.222 "adrfam": "IPv4", 00:17:03.222 "traddr": "10.0.0.1", 00:17:03.222 "trsvcid": "56920" 00:17:03.222 }, 00:17:03.222 "auth": { 00:17:03.222 "state": "completed", 00:17:03.222 "digest": "sha384", 00:17:03.222 "dhgroup": "ffdhe8192" 00:17:03.222 } 00:17:03.222 } 00:17:03.222 ]' 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.222 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:03.481 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:04.048 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.048 04:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:04.048 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.048 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.048 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.307 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.874 00:17:04.874 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.874 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.874 
04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.133 { 00:17:05.133 "cntlid": 95, 00:17:05.133 "qid": 0, 00:17:05.133 "state": "enabled", 00:17:05.133 "thread": "nvmf_tgt_poll_group_000", 00:17:05.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:05.133 "listen_address": { 00:17:05.133 "trtype": "TCP", 00:17:05.133 "adrfam": "IPv4", 00:17:05.133 "traddr": "10.0.0.2", 00:17:05.133 "trsvcid": "4420" 00:17:05.133 }, 00:17:05.133 "peer_address": { 00:17:05.133 "trtype": "TCP", 00:17:05.133 "adrfam": "IPv4", 00:17:05.133 "traddr": "10.0.0.1", 00:17:05.133 "trsvcid": "56932" 00:17:05.133 }, 00:17:05.133 "auth": { 00:17:05.133 "state": "completed", 00:17:05.133 "digest": "sha384", 00:17:05.133 "dhgroup": "ffdhe8192" 00:17:05.133 } 00:17:05.133 } 00:17:05.133 ]' 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.133 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.391 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:05.391 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.959 04:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.959 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.218 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.477 00:17:06.477 
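The @118-@121 markers in the chunk above show what drives every round in this section: nested digest/dhgroup/key loops, with bdev_nvme_set_options pinning the host to exactly one digest and one DH group before each attempt. A sketch of that sweep, assuming the digests/dhgroups/keys arrays defined earlier in target/auth.sh (sha384/sha512 and ffdhe8192/null/ffdhe2048 are the values that appear in this part of the log):

    # Sweep reconstructed from the @118-@123 trace markers; digests,
    # dhgroups and keys are arrays defined earlier in target/auth.sh.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Restrict the host to a single digest/dhgroup so each
                # round can only succeed with this exact combination
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done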
04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.477 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.477 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.735 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.735 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.735 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.735 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.736 { 00:17:06.736 "cntlid": 97, 00:17:06.736 "qid": 0, 00:17:06.736 "state": "enabled", 00:17:06.736 "thread": "nvmf_tgt_poll_group_000", 00:17:06.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:06.736 "listen_address": { 00:17:06.736 "trtype": "TCP", 00:17:06.736 "adrfam": "IPv4", 00:17:06.736 "traddr": "10.0.0.2", 00:17:06.736 "trsvcid": "4420" 00:17:06.736 }, 00:17:06.736 "peer_address": { 00:17:06.736 "trtype": "TCP", 00:17:06.736 "adrfam": "IPv4", 00:17:06.736 "traddr": "10.0.0.1", 00:17:06.736 "trsvcid": "56950" 00:17:06.736 }, 00:17:06.736 "auth": { 00:17:06.736 "state": "completed", 00:17:06.736 "digest": "sha512", 00:17:06.736 "dhgroup": "null" 00:17:06.736 } 00:17:06.736 } 00:17:06.736 ]' 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.736 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.994 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:06.994 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.562 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.821 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.080 00:17:08.080 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.080 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.080 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.339 { 00:17:08.339 "cntlid": 99, 00:17:08.339 "qid": 0, 00:17:08.339 "state": "enabled", 00:17:08.339 "thread": "nvmf_tgt_poll_group_000", 00:17:08.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:08.339 "listen_address": { 00:17:08.339 "trtype": "TCP", 00:17:08.339 "adrfam": "IPv4", 00:17:08.339 "traddr": "10.0.0.2", 00:17:08.339 "trsvcid": "4420" 00:17:08.339 }, 00:17:08.339 "peer_address": { 00:17:08.339 "trtype": "TCP", 00:17:08.339 "adrfam": "IPv4", 00:17:08.339 "traddr": "10.0.0.1", 00:17:08.339 "trsvcid": "46042" 00:17:08.339 }, 00:17:08.339 "auth": { 00:17:08.339 "state": "completed", 00:17:08.339 "digest": "sha512", 00:17:08.339 "dhgroup": "null" 00:17:08.339 } 00:17:08.339 } 00:17:08.339 ]' 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.339 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.598 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:08.598 04:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.165 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:09.424 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.683 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.683 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.683 { 00:17:09.683 "cntlid": 101, 00:17:09.683 "qid": 0, 00:17:09.683 "state": "enabled", 00:17:09.683 "thread": "nvmf_tgt_poll_group_000", 00:17:09.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:09.683 "listen_address": { 00:17:09.683 "trtype": "TCP", 00:17:09.683 "adrfam": "IPv4", 00:17:09.683 "traddr": "10.0.0.2", 00:17:09.683 "trsvcid": "4420" 00:17:09.683 }, 00:17:09.683 "peer_address": { 00:17:09.683 "trtype": "TCP", 00:17:09.683 "adrfam": "IPv4", 00:17:09.683 "traddr": "10.0.0.1", 00:17:09.683 "trsvcid": "46078" 00:17:09.683 }, 00:17:09.683 "auth": { 00:17:09.683 "state": "completed", 00:17:09.683 "digest": "sha512", 00:17:09.684 "dhgroup": "null" 00:17:09.684 } 00:17:09.684 } 00:17:09.684 ]' 00:17:09.942 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.942 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.201 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:10.201 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.768 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.027 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.286 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.286 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.544 { 00:17:11.544 "cntlid": 103, 00:17:11.544 "qid": 0, 00:17:11.544 "state": "enabled", 00:17:11.544 "thread": "nvmf_tgt_poll_group_000", 00:17:11.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:11.544 "listen_address": { 00:17:11.544 "trtype": "TCP", 00:17:11.544 "adrfam": "IPv4", 00:17:11.544 "traddr": "10.0.0.2", 00:17:11.544 "trsvcid": "4420" 00:17:11.544 }, 00:17:11.544 "peer_address": { 00:17:11.544 "trtype": "TCP", 00:17:11.544 "adrfam": "IPv4", 00:17:11.544 "traddr": "10.0.0.1", 00:17:11.544 "trsvcid": "46094" 00:17:11.544 }, 00:17:11.544 "auth": { 00:17:11.544 "state": "completed", 00:17:11.544 "digest": "sha512", 00:17:11.544 "dhgroup": "null" 00:17:11.544 } 00:17:11.544 } 00:17:11.544 ]' 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.544 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.803 04:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:11.803 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.370 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.629 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.629 00:17:12.888 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.888 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.888 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.888 { 00:17:12.888 "cntlid": 105, 00:17:12.888 "qid": 0, 00:17:12.888 "state": "enabled", 00:17:12.888 "thread": "nvmf_tgt_poll_group_000", 00:17:12.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:12.888 "listen_address": { 00:17:12.888 "trtype": "TCP", 00:17:12.888 "adrfam": "IPv4", 00:17:12.888 "traddr": "10.0.0.2", 00:17:12.888 "trsvcid": "4420" 00:17:12.888 }, 00:17:12.888 "peer_address": { 00:17:12.888 "trtype": "TCP", 00:17:12.888 "adrfam": "IPv4", 00:17:12.888 "traddr": "10.0.0.1", 00:17:12.888 "trsvcid": "46124" 00:17:12.888 }, 00:17:12.888 "auth": { 00:17:12.888 "state": "completed", 00:17:12.888 "digest": "sha512", 00:17:12.888 "dhgroup": "ffdhe2048" 00:17:12.888 } 00:17:12.888 } 00:17:12.888 ]' 00:17:12.888 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.147 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.147 04:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.405 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:13.405 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.972 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.973 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.231 00:17:14.231 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.231 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.231 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.490 { 00:17:14.490 "cntlid": 107, 00:17:14.490 "qid": 0, 00:17:14.490 "state": "enabled", 00:17:14.490 "thread": "nvmf_tgt_poll_group_000", 00:17:14.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:14.490 "listen_address": { 00:17:14.490 "trtype": "TCP", 00:17:14.490 "adrfam": "IPv4", 00:17:14.490 "traddr": "10.0.0.2", 00:17:14.490 "trsvcid": "4420" 00:17:14.490 }, 00:17:14.490 "peer_address": { 00:17:14.490 "trtype": "TCP", 00:17:14.490 "adrfam": "IPv4", 00:17:14.490 "traddr": "10.0.0.1", 00:17:14.490 "trsvcid": "46142" 00:17:14.490 }, 00:17:14.490 "auth": { 00:17:14.490 "state": "completed", 00:17:14.490 "digest": "sha512", 00:17:14.490 "dhgroup": "ffdhe2048" 00:17:14.490 } 00:17:14.490 } 00:17:14.490 ]' 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.490 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.756 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.756 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:14.756 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.756 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.756 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:15.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
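
The stretch above is one full pass of the test's connect_authenticate helper for sha512/ffdhe2048: the host-side bdev layer is pinned to a single digest and DH group, the target grants the host NQN the key pair under test, an authenticated controller is attached over TCP, and the result is checked before detaching. A condensed sketch of that pass, assuming the keyring entries key2/ckey2 were registered earlier in auth.sh outside this excerpt (hostrpc expands to rpc.py -s /var/tmp/host.sock as traced above; rpc_cmd is the autotest wrapper that drives the target's own RPC socket):

  # One connect_authenticate pass, condensed from the trace above. key2/ckey2
  # are keyring names set up earlier in the script, outside this excerpt.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

  # Host side: restrict the initiator to the digest/DH-group pair under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: admit the host on the subsystem with a key and a controller key
  # (rpc_cmd: the test harness wrapper around rpc.py for the target socket).
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach an authenticated controller over TCP, confirm it exists, detach again.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
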
00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.581 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.839 00:17:15.839 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.839 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.839 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.099 { 00:17:16.099 "cntlid": 109, 00:17:16.099 "qid": 0, 00:17:16.099 "state": "enabled", 00:17:16.099 "thread": "nvmf_tgt_poll_group_000", 00:17:16.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:16.099 "listen_address": { 00:17:16.099 "trtype": "TCP", 00:17:16.099 "adrfam": "IPv4", 00:17:16.099 "traddr": "10.0.0.2", 00:17:16.099 "trsvcid": "4420" 00:17:16.099 }, 00:17:16.099 "peer_address": { 00:17:16.099 "trtype": "TCP", 00:17:16.099 "adrfam": "IPv4", 00:17:16.099 "traddr": "10.0.0.1", 00:17:16.099 "trsvcid": "46174" 00:17:16.099 }, 00:17:16.099 "auth": { 00:17:16.099 "state": "completed", 00:17:16.099 "digest": "sha512", 00:17:16.099 "dhgroup": "ffdhe2048" 00:17:16.099 } 00:17:16.099 } 00:17:16.099 ]' 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.099 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.358 04:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.358 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.358 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.358 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.358 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.616 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:16.616 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.182 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.183 04:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.183 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.441 00:17:17.441 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.441 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.441 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.699 { 00:17:17.699 "cntlid": 111, 00:17:17.699 "qid": 0, 00:17:17.699 "state": "enabled", 00:17:17.699 "thread": "nvmf_tgt_poll_group_000", 00:17:17.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:17.699 "listen_address": { 00:17:17.699 "trtype": "TCP", 00:17:17.699 "adrfam": "IPv4", 00:17:17.699 "traddr": "10.0.0.2", 00:17:17.699 "trsvcid": "4420" 00:17:17.699 }, 00:17:17.699 "peer_address": { 00:17:17.699 "trtype": "TCP", 00:17:17.699 "adrfam": "IPv4", 00:17:17.699 "traddr": "10.0.0.1", 00:17:17.699 "trsvcid": "46200" 00:17:17.699 }, 00:17:17.699 "auth": { 00:17:17.699 "state": "completed", 00:17:17.699 "digest": "sha512", 00:17:17.699 "dhgroup": "ffdhe2048" 00:17:17.699 } 00:17:17.699 } 00:17:17.699 ]' 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.699 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.699 
04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.957 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.957 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.957 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.957 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.957 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.957 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:17.957 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.554 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.812 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.070 00:17:19.070 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.070 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.070 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.327 { 00:17:19.327 "cntlid": 113, 00:17:19.327 "qid": 0, 00:17:19.327 "state": "enabled", 00:17:19.327 "thread": "nvmf_tgt_poll_group_000", 00:17:19.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:19.327 "listen_address": { 00:17:19.327 "trtype": "TCP", 00:17:19.327 "adrfam": "IPv4", 00:17:19.327 "traddr": "10.0.0.2", 00:17:19.327 "trsvcid": "4420" 00:17:19.327 }, 00:17:19.327 "peer_address": { 00:17:19.327 "trtype": "TCP", 00:17:19.327 "adrfam": "IPv4", 00:17:19.327 "traddr": "10.0.0.1", 00:17:19.327 "trsvcid": "34834" 00:17:19.327 }, 00:17:19.327 "auth": { 00:17:19.327 "state": "completed", 00:17:19.327 "digest": "sha512", 00:17:19.327 "dhgroup": "ffdhe3072" 00:17:19.327 } 00:17:19.327 } 00:17:19.327 ]' 00:17:19.327 04:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.327 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.328 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.585 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.585 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.585 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.585 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:19.585 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.152 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.411 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.670 00:17:20.670 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.670 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.670 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.929 { 00:17:20.929 "cntlid": 115, 00:17:20.929 "qid": 0, 00:17:20.929 "state": "enabled", 00:17:20.929 "thread": "nvmf_tgt_poll_group_000", 00:17:20.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:20.929 "listen_address": { 00:17:20.929 "trtype": "TCP", 00:17:20.929 "adrfam": "IPv4", 00:17:20.929 "traddr": "10.0.0.2", 00:17:20.929 "trsvcid": "4420" 00:17:20.929 }, 00:17:20.929 "peer_address": { 00:17:20.929 "trtype": "TCP", 00:17:20.929 "adrfam": "IPv4", 
00:17:20.929 "traddr": "10.0.0.1", 00:17:20.929 "trsvcid": "34852" 00:17:20.929 }, 00:17:20.929 "auth": { 00:17:20.929 "state": "completed", 00:17:20.929 "digest": "sha512", 00:17:20.929 "dhgroup": "ffdhe3072" 00:17:20.929 } 00:17:20.929 } 00:17:20.929 ]' 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.929 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.187 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:21.187 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.754 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.013 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.272 00:17:22.272 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.272 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.272 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.530 { 00:17:22.530 "cntlid": 117, 00:17:22.530 "qid": 0, 00:17:22.530 "state": "enabled", 00:17:22.530 "thread": "nvmf_tgt_poll_group_000", 00:17:22.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:22.530 "listen_address": { 00:17:22.530 "trtype": "TCP", 
00:17:22.530 "adrfam": "IPv4", 00:17:22.530 "traddr": "10.0.0.2", 00:17:22.530 "trsvcid": "4420" 00:17:22.530 }, 00:17:22.530 "peer_address": { 00:17:22.530 "trtype": "TCP", 00:17:22.530 "adrfam": "IPv4", 00:17:22.530 "traddr": "10.0.0.1", 00:17:22.530 "trsvcid": "34880" 00:17:22.530 }, 00:17:22.530 "auth": { 00:17:22.530 "state": "completed", 00:17:22.530 "digest": "sha512", 00:17:22.530 "dhgroup": "ffdhe3072" 00:17:22.530 } 00:17:22.530 } 00:17:22.530 ]' 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.530 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.789 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:22.789 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.369 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.713 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.023 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.023 { 00:17:24.023 "cntlid": 119, 00:17:24.023 "qid": 0, 00:17:24.023 "state": "enabled", 00:17:24.023 "thread": "nvmf_tgt_poll_group_000", 00:17:24.023 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:24.023 "listen_address": { 00:17:24.023 "trtype": "TCP", 00:17:24.023 "adrfam": "IPv4", 00:17:24.023 "traddr": "10.0.0.2", 00:17:24.023 "trsvcid": "4420" 00:17:24.023 }, 00:17:24.023 "peer_address": { 00:17:24.023 "trtype": "TCP", 00:17:24.023 "adrfam": "IPv4", 00:17:24.023 "traddr": "10.0.0.1", 00:17:24.023 "trsvcid": "34898" 00:17:24.023 }, 00:17:24.023 "auth": { 00:17:24.023 "state": "completed", 00:17:24.023 "digest": "sha512", 00:17:24.023 "dhgroup": "ffdhe3072" 00:17:24.023 } 00:17:24.023 } 00:17:24.023 ]' 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.023 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.282 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.282 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.282 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.282 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:24.282 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.849 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.849 04:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.108 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.367 00:17:25.367 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.367 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.367 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.626 04:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.626 { 00:17:25.626 "cntlid": 121, 00:17:25.626 "qid": 0, 00:17:25.626 "state": "enabled", 00:17:25.626 "thread": "nvmf_tgt_poll_group_000", 00:17:25.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:25.626 "listen_address": { 00:17:25.626 "trtype": "TCP", 00:17:25.626 "adrfam": "IPv4", 00:17:25.626 "traddr": "10.0.0.2", 00:17:25.626 "trsvcid": "4420" 00:17:25.626 }, 00:17:25.626 "peer_address": { 00:17:25.626 "trtype": "TCP", 00:17:25.626 "adrfam": "IPv4", 00:17:25.626 "traddr": "10.0.0.1", 00:17:25.626 "trsvcid": "34924" 00:17:25.626 }, 00:17:25.626 "auth": { 00:17:25.626 "state": "completed", 00:17:25.626 "digest": "sha512", 00:17:25.626 "dhgroup": "ffdhe4096" 00:17:25.626 } 00:17:25.626 } 00:17:25.626 ]' 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.626 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.885 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.886 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.886 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.886 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:25.886 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
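
Zooming out, this whole section is one nested sweep, visible in the traced loop headers at target/auth.sh@119-120: the outer loop walks the DH groups (ffdhe2048, then ffdhe3072, now ffdhe4096) and the inner loop replays connect_authenticate for every key index, with the digest pinned to sha512 throughout this stretch. A sketch of the driving loops under those assumptions (the dhgroups/keys/ckeys arrays are filled earlier in auth.sh, outside this excerpt):

  # Reconstructed from the traced expansions above; array contents are assumed.
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done

Inside each pass, the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion drops the controller key whenever ckeys[keyid] is empty, which is why the key3 passes above add the host with --dhchap-key key3 alone. The tail of the same helper also drives the kernel initiator with the printable secrets; a sketch with the base64 blobs elided (full values appear in the trace; the two-digit field after "DHHC-1:" marks the secret's hash transform, 00 for a plain secret and 01/02/03 for SHA-256/384/512):

  # Kernel-initiator leg of one pass; <base64 blob> is a placeholder for the
  # real secret material shown in the trace.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:<base64 blob>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<base64 blob>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
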
00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.453 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.712 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.971 00:17:26.971 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.971 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.971 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.230 { 00:17:27.230 "cntlid": 123, 00:17:27.230 "qid": 0, 00:17:27.230 "state": "enabled", 00:17:27.230 "thread": "nvmf_tgt_poll_group_000", 00:17:27.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:27.230 "listen_address": { 00:17:27.230 "trtype": "TCP", 00:17:27.230 "adrfam": "IPv4", 00:17:27.230 "traddr": "10.0.0.2", 00:17:27.230 "trsvcid": "4420" 00:17:27.230 }, 00:17:27.230 "peer_address": { 00:17:27.230 "trtype": "TCP", 00:17:27.230 "adrfam": "IPv4", 00:17:27.230 "traddr": "10.0.0.1", 00:17:27.230 "trsvcid": "34956" 00:17:27.230 }, 00:17:27.230 "auth": { 00:17:27.230 "state": "completed", 00:17:27.230 "digest": "sha512", 00:17:27.230 "dhgroup": "ffdhe4096" 00:17:27.230 } 00:17:27.230 } 00:17:27.230 ]' 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.230 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.490 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.490 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.490 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.490 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:27.490 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 04:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.057 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.316 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.575 00:17:28.575 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.575 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.575 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.834 04:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.834 { 00:17:28.834 "cntlid": 125, 00:17:28.834 "qid": 0, 00:17:28.834 "state": "enabled", 00:17:28.834 "thread": "nvmf_tgt_poll_group_000", 00:17:28.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:28.834 "listen_address": { 00:17:28.834 "trtype": "TCP", 00:17:28.834 "adrfam": "IPv4", 00:17:28.834 "traddr": "10.0.0.2", 00:17:28.834 "trsvcid": "4420" 00:17:28.834 }, 00:17:28.834 "peer_address": { 00:17:28.834 "trtype": "TCP", 00:17:28.834 "adrfam": "IPv4", 00:17:28.834 "traddr": "10.0.0.1", 00:17:28.834 "trsvcid": "46978" 00:17:28.834 }, 00:17:28.834 "auth": { 00:17:28.834 "state": "completed", 00:17:28.834 "digest": "sha512", 00:17:28.834 "dhgroup": "ffdhe4096" 00:17:28.834 } 00:17:28.834 } 00:17:28.834 ]' 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.834 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:29.093 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.660 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.919 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.178 00:17:30.178 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.178 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.178 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.437 04:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.437 { 00:17:30.437 "cntlid": 127, 00:17:30.437 "qid": 0, 00:17:30.437 "state": "enabled", 00:17:30.437 "thread": "nvmf_tgt_poll_group_000", 00:17:30.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:30.437 "listen_address": { 00:17:30.437 "trtype": "TCP", 00:17:30.437 "adrfam": "IPv4", 00:17:30.437 "traddr": "10.0.0.2", 00:17:30.437 "trsvcid": "4420" 00:17:30.437 }, 00:17:30.437 "peer_address": { 00:17:30.437 "trtype": "TCP", 00:17:30.437 "adrfam": "IPv4", 00:17:30.437 "traddr": "10.0.0.1", 00:17:30.437 "trsvcid": "46994" 00:17:30.437 }, 00:17:30.437 "auth": { 00:17:30.437 "state": "completed", 00:17:30.437 "digest": "sha512", 00:17:30.437 "dhgroup": "ffdhe4096" 00:17:30.437 } 00:17:30.437 } 00:17:30.437 ]' 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.437 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.696 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.696 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.696 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.696 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:30.696 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.264 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.523 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.091 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.091 
04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.091 { 00:17:32.091 "cntlid": 129, 00:17:32.091 "qid": 0, 00:17:32.091 "state": "enabled", 00:17:32.091 "thread": "nvmf_tgt_poll_group_000", 00:17:32.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:32.091 "listen_address": { 00:17:32.091 "trtype": "TCP", 00:17:32.091 "adrfam": "IPv4", 00:17:32.091 "traddr": "10.0.0.2", 00:17:32.091 "trsvcid": "4420" 00:17:32.091 }, 00:17:32.091 "peer_address": { 00:17:32.091 "trtype": "TCP", 00:17:32.091 "adrfam": "IPv4", 00:17:32.091 "traddr": "10.0.0.1", 00:17:32.091 "trsvcid": "47024" 00:17:32.091 }, 00:17:32.091 "auth": { 00:17:32.091 "state": "completed", 00:17:32.091 "digest": "sha512", 00:17:32.091 "dhgroup": "ffdhe6144" 00:17:32.091 } 00:17:32.091 } 00:17:32.091 ]' 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.091 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.350 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.350 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.350 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.350 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.350 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.609 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:32.610 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.178 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.746 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.746 { 00:17:33.746 "cntlid": 131, 00:17:33.746 "qid": 0, 00:17:33.746 "state": "enabled", 00:17:33.746 "thread": "nvmf_tgt_poll_group_000", 00:17:33.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:33.746 "listen_address": { 00:17:33.746 "trtype": "TCP", 00:17:33.746 "adrfam": "IPv4", 00:17:33.746 "traddr": "10.0.0.2", 00:17:33.746 "trsvcid": "4420" 00:17:33.746 }, 00:17:33.746 "peer_address": { 00:17:33.746 "trtype": "TCP", 00:17:33.746 "adrfam": "IPv4", 00:17:33.746 "traddr": "10.0.0.1", 00:17:33.746 "trsvcid": "47038" 00:17:33.746 }, 00:17:33.746 "auth": { 00:17:33.746 "state": "completed", 00:17:33.746 "digest": "sha512", 00:17:33.746 "dhgroup": "ffdhe6144" 00:17:33.746 } 00:17:33.746 } 00:17:33.746 ]' 00:17:33.746 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.746 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.746 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.005 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.005 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.005 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.005 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.005 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.263 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:34.264 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.832 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.832 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.400 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.400 { 00:17:35.400 "cntlid": 133, 00:17:35.400 "qid": 0, 00:17:35.400 "state": "enabled", 00:17:35.400 "thread": "nvmf_tgt_poll_group_000", 00:17:35.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:35.400 "listen_address": { 00:17:35.400 "trtype": "TCP", 00:17:35.400 "adrfam": "IPv4", 00:17:35.400 "traddr": "10.0.0.2", 00:17:35.400 "trsvcid": "4420" 00:17:35.400 }, 00:17:35.400 "peer_address": { 00:17:35.400 "trtype": "TCP", 00:17:35.400 "adrfam": "IPv4", 00:17:35.400 "traddr": "10.0.0.1", 00:17:35.400 "trsvcid": "47062" 00:17:35.400 }, 00:17:35.400 "auth": { 00:17:35.400 "state": "completed", 00:17:35.400 "digest": "sha512", 00:17:35.400 "dhgroup": "ffdhe6144" 00:17:35.400 } 00:17:35.400 } 00:17:35.400 ]' 00:17:35.400 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.659 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.921 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret 
DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:35.921 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:36.488 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.488 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:36.488 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.488 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:36.489 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.056 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.056 { 00:17:37.056 "cntlid": 135, 00:17:37.056 "qid": 0, 00:17:37.056 "state": "enabled", 00:17:37.056 "thread": "nvmf_tgt_poll_group_000", 00:17:37.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:37.056 "listen_address": { 00:17:37.056 "trtype": "TCP", 00:17:37.056 "adrfam": "IPv4", 00:17:37.056 "traddr": "10.0.0.2", 00:17:37.056 "trsvcid": "4420" 00:17:37.056 }, 00:17:37.056 "peer_address": { 00:17:37.056 "trtype": "TCP", 00:17:37.056 "adrfam": "IPv4", 00:17:37.056 "traddr": "10.0.0.1", 00:17:37.056 "trsvcid": "47098" 00:17:37.056 }, 00:17:37.056 "auth": { 00:17:37.056 "state": "completed", 00:17:37.056 "digest": "sha512", 00:17:37.056 "dhgroup": "ffdhe6144" 00:17:37.056 } 00:17:37.056 } 00:17:37.056 ]' 00:17:37.056 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.315 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.572 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:37.572 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.140 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.399 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.658 00:17:38.917 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.917 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.917 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.917 { 00:17:38.917 "cntlid": 137, 00:17:38.917 "qid": 0, 00:17:38.917 "state": "enabled", 00:17:38.917 "thread": "nvmf_tgt_poll_group_000", 00:17:38.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:38.917 "listen_address": { 00:17:38.917 "trtype": "TCP", 00:17:38.917 "adrfam": "IPv4", 00:17:38.917 "traddr": "10.0.0.2", 00:17:38.917 "trsvcid": "4420" 00:17:38.917 }, 00:17:38.917 "peer_address": { 00:17:38.917 "trtype": "TCP", 00:17:38.917 "adrfam": "IPv4", 00:17:38.917 "traddr": "10.0.0.1", 00:17:38.917 "trsvcid": "54962" 00:17:38.917 }, 00:17:38.917 "auth": { 00:17:38.917 "state": "completed", 00:17:38.917 "digest": "sha512", 00:17:38.917 "dhgroup": "ffdhe8192" 00:17:38.917 } 00:17:38.917 } 00:17:38.917 ]' 00:17:38.917 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.176 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.445 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:39.445 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.022 04:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.022 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.589 00:17:40.589 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.589 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.589 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.848 { 00:17:40.848 "cntlid": 139, 00:17:40.848 "qid": 0, 00:17:40.848 "state": "enabled", 00:17:40.848 "thread": "nvmf_tgt_poll_group_000", 00:17:40.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:40.848 "listen_address": { 00:17:40.848 "trtype": "TCP", 00:17:40.848 "adrfam": "IPv4", 00:17:40.848 "traddr": "10.0.0.2", 00:17:40.848 "trsvcid": "4420" 00:17:40.848 }, 00:17:40.848 "peer_address": { 00:17:40.848 "trtype": "TCP", 00:17:40.848 "adrfam": "IPv4", 00:17:40.848 "traddr": "10.0.0.1", 00:17:40.848 "trsvcid": "54992" 00:17:40.848 }, 00:17:40.848 "auth": { 00:17:40.848 "state": "completed", 00:17:40.848 "digest": "sha512", 00:17:40.848 "dhgroup": "ffdhe8192" 00:17:40.848 } 00:17:40.848 } 00:17:40.848 ]' 00:17:40.848 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.848 04:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.848 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.107 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:41.107 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: --dhchap-ctrl-secret DHHC-1:02:MTlkOGE1NDU1MjgxNjljYTdmYTkyODE2NTI2NmJjNTFmMWI1NTUzZjViMTk5YTZk4sKOZw==: 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.674 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.932 04:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.932 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.499 00:17:42.499 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.499 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.499 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.499 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.758 { 00:17:42.758 "cntlid": 141, 00:17:42.758 "qid": 0, 00:17:42.758 "state": "enabled", 00:17:42.758 "thread": "nvmf_tgt_poll_group_000", 00:17:42.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:42.758 "listen_address": { 00:17:42.758 "trtype": "TCP", 00:17:42.758 "adrfam": "IPv4", 00:17:42.758 "traddr": "10.0.0.2", 00:17:42.758 "trsvcid": "4420" 00:17:42.758 }, 00:17:42.758 "peer_address": { 00:17:42.758 "trtype": "TCP", 00:17:42.758 "adrfam": "IPv4", 00:17:42.758 "traddr": "10.0.0.1", 00:17:42.758 "trsvcid": "55026" 00:17:42.758 }, 00:17:42.758 "auth": { 00:17:42.758 "state": "completed", 00:17:42.758 "digest": "sha512", 00:17:42.758 "dhgroup": "ffdhe8192" 00:17:42.758 } 00:17:42.758 } 00:17:42.758 ]' 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.758 04:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.758 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.017 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:43.017 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:01:Y2YzZDMzOTY4MTFkN2MwZjdjYTcyNzNmNTkyMDMwNWXzokMh: 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.584 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.843 04:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.843 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.102 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.360 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.360 { 00:17:44.360 "cntlid": 143, 00:17:44.360 "qid": 0, 00:17:44.360 "state": "enabled", 00:17:44.360 "thread": "nvmf_tgt_poll_group_000", 00:17:44.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:44.361 "listen_address": { 00:17:44.361 "trtype": "TCP", 00:17:44.361 "adrfam": "IPv4", 00:17:44.361 "traddr": "10.0.0.2", 00:17:44.361 "trsvcid": "4420" 00:17:44.361 }, 00:17:44.361 "peer_address": { 00:17:44.361 "trtype": "TCP", 00:17:44.361 "adrfam": "IPv4", 00:17:44.361 "traddr": "10.0.0.1", 00:17:44.361 "trsvcid": "55054" 00:17:44.361 }, 00:17:44.361 "auth": { 00:17:44.361 "state": "completed", 00:17:44.361 "digest": "sha512", 00:17:44.361 "dhgroup": "ffdhe8192" 00:17:44.361 } 00:17:44.361 } 00:17:44.361 ]' 00:17:44.361 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.619 
04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.619 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.878 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:44.878 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.445 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.446 04:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.446 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.704 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.704 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.704 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.704 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.963 00:17:45.963 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.963 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.963 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.223 { 00:17:46.223 "cntlid": 145, 00:17:46.223 "qid": 0, 00:17:46.223 "state": "enabled", 00:17:46.223 "thread": "nvmf_tgt_poll_group_000", 00:17:46.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:46.223 "listen_address": { 00:17:46.223 "trtype": "TCP", 00:17:46.223 "adrfam": "IPv4", 00:17:46.223 "traddr": "10.0.0.2", 00:17:46.223 "trsvcid": "4420" 00:17:46.223 }, 00:17:46.223 "peer_address": { 00:17:46.223 
"trtype": "TCP", 00:17:46.223 "adrfam": "IPv4", 00:17:46.223 "traddr": "10.0.0.1", 00:17:46.223 "trsvcid": "55092" 00:17:46.223 }, 00:17:46.223 "auth": { 00:17:46.223 "state": "completed", 00:17:46.223 "digest": "sha512", 00:17:46.223 "dhgroup": "ffdhe8192" 00:17:46.223 } 00:17:46.223 } 00:17:46.223 ]' 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.223 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.481 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.481 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.481 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.481 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:46.481 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTYzODA2MmJmYzJlYTFlZGYwZWNjMDc4MTI1MTdjNGZiYmU0YWEwMDE1YWE1OGM1FP1sGg==: --dhchap-ctrl-secret DHHC-1:03:ZTg5NzUyOTQ2YWFhYzYxZWE4OGQyZjhiNjE1MWZlZDIwZGYwZmZkMjFhNGE1Y2JiOWY1MWM1YTQ4MWJmODQ4ZErLk/o=: 00:17:47.049 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.049 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.049 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.049 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.308 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.567 request: 00:17:47.567 { 00:17:47.567 "name": "nvme0", 00:17:47.567 "trtype": "tcp", 00:17:47.567 "traddr": "10.0.0.2", 00:17:47.567 "adrfam": "ipv4", 00:17:47.567 "trsvcid": "4420", 00:17:47.567 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:47.567 "prchk_reftag": false, 00:17:47.567 "prchk_guard": false, 00:17:47.567 "hdgst": false, 00:17:47.567 "ddgst": false, 00:17:47.567 "dhchap_key": "key2", 00:17:47.567 "allow_unrecognized_csi": false, 00:17:47.567 "method": "bdev_nvme_attach_controller", 00:17:47.567 "req_id": 1 00:17:47.567 } 00:17:47.567 Got JSON-RPC error response 00:17:47.567 response: 00:17:47.567 { 00:17:47.567 "code": -5, 00:17:47.567 "message": "Input/output error" 00:17:47.567 } 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.567 04:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.567 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.135 request: 00:17:48.135 { 00:17:48.135 "name": "nvme0", 00:17:48.135 "trtype": "tcp", 00:17:48.135 "traddr": "10.0.0.2", 00:17:48.135 "adrfam": "ipv4", 00:17:48.135 "trsvcid": "4420", 00:17:48.135 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:48.135 "prchk_reftag": false, 00:17:48.135 "prchk_guard": false, 00:17:48.135 "hdgst": false, 00:17:48.135 "ddgst": false, 00:17:48.135 "dhchap_key": "key1", 00:17:48.135 "dhchap_ctrlr_key": "ckey2", 00:17:48.135 "allow_unrecognized_csi": false, 00:17:48.135 "method": "bdev_nvme_attach_controller", 00:17:48.135 "req_id": 1 00:17:48.135 } 00:17:48.135 Got JSON-RPC error response 00:17:48.135 response: 00:17:48.135 { 00:17:48.135 "code": -5, 00:17:48.135 "message": "Input/output error" 00:17:48.135 } 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:48.135 04:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.135 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.703 request: 00:17:48.703 { 00:17:48.703 "name": "nvme0", 00:17:48.703 "trtype": "tcp", 00:17:48.703 "traddr": "10.0.0.2", 00:17:48.703 "adrfam": "ipv4", 00:17:48.703 "trsvcid": "4420", 00:17:48.703 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:48.703 "prchk_reftag": false, 00:17:48.703 "prchk_guard": false, 00:17:48.703 "hdgst": false, 00:17:48.703 "ddgst": false, 00:17:48.703 "dhchap_key": "key1", 00:17:48.703 "dhchap_ctrlr_key": "ckey1", 00:17:48.703 "allow_unrecognized_csi": false, 00:17:48.703 "method": "bdev_nvme_attach_controller", 00:17:48.703 "req_id": 1 00:17:48.703 } 00:17:48.703 Got JSON-RPC error response 00:17:48.703 response: 00:17:48.703 { 00:17:48.703 "code": -5, 00:17:48.703 "message": "Input/output error" 00:17:48.703 } 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 38022 ']' 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38022' 00:17:48.703 killing process with pid 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 38022 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.703 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=59768 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 59768 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 59768 ']' 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.962 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 59768 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 59768 ']' 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
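
The restart sequence above launches a fresh nvmf target (nvmfpid=59768) inside the cvl_0_0_ns_spdk network namespace with the nvmf_auth debug log flag and --wait-for-rpc, then blocks in waitforlisten until the RPC socket answers. A minimal stand-alone sketch of the same bring-up follows; the 10-second polling loop is an illustrative stand-in for the autotest's waitforlisten helper, not the helper itself:

    #!/usr/bin/env bash
    # Sketch: start nvmf_tgt with DH-HMAC-CHAP debug logging and wait for its RPC socket.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    sudo ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll until the RPC server creates its UNIX-domain socket (waitforlisten equivalent;
    # the timeout and interval here are assumptions).
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
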
00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.962 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.221 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.221 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:49.221 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:49.221 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.221 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.480 null0 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KdC 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.zS0 ]] 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zS0 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.480 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fBo 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.mt3 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mt3 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.481 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g6R 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.vmJ ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vmJ 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qfm 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
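
With the new target up, the trace above re-registers every generated secret as a named keyring entry (keyring_file_add_key key0 /tmp/spdk.key-null.KdC through key3 /tmp/spdk.key-sha512.Qfm, plus the ckey* controller keys), re-adds the host to cnode0 with --dhchap-key key3 only, and then attaches a controller offering that key. Condensed into the bare RPC calls visible in the log (key file names and the host NQN are taken verbatim from the trace; the rpc.py path is abbreviated), the flow for key3 is:

    # Target side: register the secret file under the name the subsystem will reference.
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Qfm
    # Authorize the host for cnode0 with key3 alone; omitting --dhchap-ctrlr-key selects
    # unidirectional authentication (the host proves itself, the target does not).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key3
    # Host side (a second SPDK app on /var/tmp/host.sock): attach, offering the same key.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
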
00:17:49.481 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.417 nvme0n1 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.417 { 00:17:50.417 "cntlid": 1, 00:17:50.417 "qid": 0, 00:17:50.417 "state": "enabled", 00:17:50.417 "thread": "nvmf_tgt_poll_group_000", 00:17:50.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:50.417 "listen_address": { 00:17:50.417 "trtype": "TCP", 00:17:50.417 "adrfam": "IPv4", 00:17:50.417 "traddr": "10.0.0.2", 00:17:50.417 "trsvcid": "4420" 00:17:50.417 }, 00:17:50.417 "peer_address": { 00:17:50.417 "trtype": "TCP", 00:17:50.417 "adrfam": "IPv4", 00:17:50.417 "traddr": "10.0.0.1", 00:17:50.417 "trsvcid": "42632" 00:17:50.417 }, 00:17:50.417 "auth": { 00:17:50.417 "state": "completed", 00:17:50.417 "digest": "sha512", 00:17:50.417 "dhgroup": "ffdhe8192" 00:17:50.417 } 00:17:50.417 } 00:17:50.417 ]' 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.417 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.418 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.676 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:50.676 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:51.243 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.243 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:51.243 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.243 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.243 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:51.502 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.503 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.503 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.503 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.762 request: 00:17:51.762 { 00:17:51.762 "name": "nvme0", 00:17:51.762 "trtype": "tcp", 00:17:51.762 "traddr": "10.0.0.2", 00:17:51.762 "adrfam": "ipv4", 00:17:51.762 "trsvcid": "4420", 00:17:51.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:51.762 "prchk_reftag": false, 00:17:51.762 "prchk_guard": false, 00:17:51.762 "hdgst": false, 00:17:51.762 "ddgst": false, 00:17:51.762 "dhchap_key": "key3", 00:17:51.762 "allow_unrecognized_csi": false, 00:17:51.762 "method": "bdev_nvme_attach_controller", 00:17:51.762 "req_id": 1 00:17:51.762 } 00:17:51.762 Got JSON-RPC error response 00:17:51.762 response: 00:17:51.762 { 00:17:51.762 "code": -5, 00:17:51.762 "message": "Input/output error" 00:17:51.762 } 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:51.762 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.021 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.280 request: 00:17:52.280 { 00:17:52.280 "name": "nvme0", 00:17:52.280 "trtype": "tcp", 00:17:52.280 "traddr": "10.0.0.2", 00:17:52.280 "adrfam": "ipv4", 00:17:52.280 "trsvcid": "4420", 00:17:52.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:52.280 "prchk_reftag": false, 00:17:52.280 "prchk_guard": false, 00:17:52.280 "hdgst": false, 00:17:52.280 "ddgst": false, 00:17:52.280 "dhchap_key": "key3", 00:17:52.280 "allow_unrecognized_csi": false, 00:17:52.280 "method": "bdev_nvme_attach_controller", 00:17:52.280 "req_id": 1 00:17:52.280 } 00:17:52.280 Got JSON-RPC error response 00:17:52.280 response: 00:17:52.280 { 00:17:52.280 "code": -5, 00:17:52.280 "message": "Input/output error" 00:17:52.280 } 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.280 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.539 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.798 request: 00:17:52.798 { 00:17:52.798 "name": "nvme0", 00:17:52.798 "trtype": "tcp", 00:17:52.798 "traddr": "10.0.0.2", 00:17:52.798 "adrfam": "ipv4", 00:17:52.798 "trsvcid": "4420", 00:17:52.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:52.798 "prchk_reftag": false, 00:17:52.798 "prchk_guard": false, 00:17:52.798 "hdgst": false, 00:17:52.798 "ddgst": false, 00:17:52.798 "dhchap_key": "key0", 00:17:52.798 "dhchap_ctrlr_key": "key1", 00:17:52.798 "allow_unrecognized_csi": false, 00:17:52.798 "method": "bdev_nvme_attach_controller", 00:17:52.798 "req_id": 1 00:17:52.798 } 00:17:52.798 Got JSON-RPC error response 00:17:52.798 response: 00:17:52.798 { 00:17:52.798 "code": -5, 00:17:52.798 "message": "Input/output error" 00:17:52.798 } 00:17:52.798 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:52.798 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.798 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.798 04:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.798 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:52.798 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:52.799 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:53.058 nvme0n1 00:17:53.058 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:53.058 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.058 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:53.316 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.316 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.316 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.316 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:53.316 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.317 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.576 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:53.576 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.576 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.143 nvme0n1 00:17:54.143 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:54.143 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:54.143 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:54.402 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.661 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.661 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:54.661 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: --dhchap-ctrl-secret DHHC-1:03:Yzg2MTcxMjRkZjhmZWEwY2NiY2UyMDk4Y2FjNGZiZGY0YjllZjhhNzI5M2RiYjhiNGUwZGI5Nzc3YzQwMTYxNQltDgk=: 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:55.229 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:55.797 request: 00:17:55.797 { 00:17:55.797 "name": "nvme0", 00:17:55.797 "trtype": "tcp", 00:17:55.797 "traddr": "10.0.0.2", 00:17:55.797 "adrfam": "ipv4", 00:17:55.797 "trsvcid": "4420", 00:17:55.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:55.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:55.797 "prchk_reftag": false, 00:17:55.797 "prchk_guard": false, 00:17:55.797 "hdgst": false, 00:17:55.797 "ddgst": false, 00:17:55.797 "dhchap_key": "key1", 00:17:55.797 "allow_unrecognized_csi": false, 00:17:55.797 "method": "bdev_nvme_attach_controller", 00:17:55.797 "req_id": 1 00:17:55.797 } 00:17:55.797 Got JSON-RPC error response 00:17:55.797 response: 00:17:55.797 { 00:17:55.797 "code": -5, 00:17:55.797 "message": "Input/output error" 00:17:55.797 } 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.797 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.733 nvme0n1 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.733 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.992 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:57.250 nvme0n1 00:17:57.250 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:57.250 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:57.250 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: '' 2s 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: ]] 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDJhMWJiNTU5YzdjYzQ5NzNmM2IxMjE0MWIwMjk0MjDh8nkt: 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:57.509 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: 2s 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: ]] 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWEwNjRiMzIyMGQ2ZGM1YTJhYzhkYzFhMWQ3M2M3YzNkODE1MWVkMjY5MDE4ZTIxfx7TvA==: 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:00.042 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.946 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.515 nvme0n1 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.515 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:03.082 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.434 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.750 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:04.009 request: 00:18:04.009 { 00:18:04.009 "name": "nvme0", 00:18:04.009 "dhchap_key": "key1", 00:18:04.009 "dhchap_ctrlr_key": "key3", 00:18:04.009 "method": "bdev_nvme_set_keys", 00:18:04.009 "req_id": 1 00:18:04.009 } 00:18:04.009 Got JSON-RPC error response 00:18:04.009 response: 00:18:04.009 { 00:18:04.009 "code": -13, 00:18:04.009 "message": "Permission denied" 00:18:04.009 } 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:04.009 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.268 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:04.268 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:05.203 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:05.203 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:05.203 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.462 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.028 nvme0n1 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.285 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.544 request: 00:18:06.544 { 00:18:06.544 "name": "nvme0", 00:18:06.544 "dhchap_key": "key2", 00:18:06.544 "dhchap_ctrlr_key": "key0", 00:18:06.544 "method": "bdev_nvme_set_keys", 00:18:06.544 "req_id": 1 00:18:06.544 } 00:18:06.544 Got JSON-RPC error response 00:18:06.544 response: 00:18:06.544 { 00:18:06.544 "code": -13, 00:18:06.544 "message": "Permission denied" 00:18:06.544 } 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.544 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.803 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:06.803 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:07.739 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.739 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.739 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 38103 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 38103 ']' 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 38103 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:07.999 04:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38103 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38103' 00:18:07.999 killing process with pid 38103 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 38103 00:18:07.999 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 38103 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.567 rmmod nvme_tcp 00:18:08.567 rmmod nvme_fabrics 00:18:08.567 rmmod nvme_keyring 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 59768 ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 59768 ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59768' 00:18:08.567 killing process with pid 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 59768 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.567 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.826 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.826 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.826 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.826 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.826 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KdC /tmp/spdk.key-sha256.fBo /tmp/spdk.key-sha384.g6R /tmp/spdk.key-sha512.Qfm /tmp/spdk.key-sha512.zS0 /tmp/spdk.key-sha384.mt3 /tmp/spdk.key-sha256.vmJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:10.731 00:18:10.731 real 2m31.808s 00:18:10.731 user 5m49.877s 00:18:10.731 sys 0m24.293s 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.731 ************************************ 00:18:10.731 END TEST nvmf_auth_target 00:18:10.731 ************************************ 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.731 ************************************ 00:18:10.731 START TEST nvmf_bdevio_no_huge 00:18:10.731 ************************************ 00:18:10.731 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.991 * Looking for test storage... 
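[Aside, not part of the captured trace: the nvmf_auth_target run that ends above exercises SPDK's DH-HMAC-CHAP re-keying. The target declares which keys it will accept for a host via nvmf_subsystem_set_keys, the host re-authenticates a live controller via bdev_nvme_set_keys, and a pair the target does not allow is expected to fail, which is what the "code": -13 ("Permission denied") responses in the trace verify. Stripped of the xtrace noise, and assuming the same target at 10.0.0.2:4420, a host-side RPC server on /var/tmp/host.sock, and keys key0..key3 already loaded (presumably from the /tmp/spdk.key-* files removed during cleanup above), the happy path reduces to roughly:

  # target side: accept key0 as the host key and key1 as the controller (bidirectional) key
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 $HOSTNQN \
      --dhchap-key key0 --dhchap-ctrlr-key key1
  # host side: attach a controller using the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key key1
  # rotation: the target switches to key2/key3, then the host follows on the live
  # controller; asking the host to re-key with a pair the target no longer lists
  # is what produces the -13 errors seen earlier
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 $HOSTNQN \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

Here $HOSTNQN stands for the nqn.2014-08.org.nvmexpress:uuid host NQN used throughout the trace; the RPC names and flags are taken verbatim from the calls logged above, while the ordering shown is a simplified sketch of the test flow, not a substitute for target/auth.sh.]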
00:18:10.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.991 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:10.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.992 --rc genhtml_branch_coverage=1 00:18:10.992 --rc genhtml_function_coverage=1 00:18:10.992 --rc genhtml_legend=1 00:18:10.992 --rc geninfo_all_blocks=1 00:18:10.992 --rc geninfo_unexecuted_blocks=1 00:18:10.992 00:18:10.992 ' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:10.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.992 --rc genhtml_branch_coverage=1 00:18:10.992 --rc genhtml_function_coverage=1 00:18:10.992 --rc genhtml_legend=1 00:18:10.992 --rc geninfo_all_blocks=1 00:18:10.992 --rc geninfo_unexecuted_blocks=1 00:18:10.992 00:18:10.992 ' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:10.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.992 --rc genhtml_branch_coverage=1 00:18:10.992 --rc genhtml_function_coverage=1 00:18:10.992 --rc genhtml_legend=1 00:18:10.992 --rc geninfo_all_blocks=1 00:18:10.992 --rc geninfo_unexecuted_blocks=1 00:18:10.992 00:18:10.992 ' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:10.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.992 --rc genhtml_branch_coverage=1 00:18:10.992 --rc genhtml_function_coverage=1 00:18:10.992 --rc genhtml_legend=1 00:18:10.992 --rc geninfo_all_blocks=1 00:18:10.992 --rc geninfo_unexecuted_blocks=1 00:18:10.992 00:18:10.992 ' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:10.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.992 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.993 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.562 
04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:17.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:17.562 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.562 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:17.563 Found net devices under 0000:af:00.0: cvl_0_0 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:17.563 Found net devices under 0000:af:00.1: cvl_0_1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.563 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:17.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:18:17.563 00:18:17.563 --- 10.0.0.2 ping statistics --- 00:18:17.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.563 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:18:17.563 00:18:17.563 --- 10.0.0.1 ping statistics --- 00:18:17.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.563 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=66506 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 66506 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 66506 ']' 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.563 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.563 [2024-12-10 04:05:16.168227] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:17.563 [2024-12-10 04:05:16.168277] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:17.563 [2024-12-10 04:05:16.252067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.563 [2024-12-10 04:05:16.298278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.563 [2024-12-10 04:05:16.298312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.563 [2024-12-10 04:05:16.298319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.563 [2024-12-10 04:05:16.298327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.563 [2024-12-10 04:05:16.298332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
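The namespace plumbing traced a few entries up is the whole test topology: one e810 port stays in the root namespace as the initiator, the other is moved into cvl_0_0_ns_spdk as the target side, and the two ends ping each other over 10.0.0.x before nvmf_tgt is launched inside that namespace. Condensed into a standalone sketch (names and addresses are taken verbatim from this run; the iptables comment tag that common.sh appends and the initial addr flushes are omitted, and root privileges are assumed):

  # Rebuild the cvl_0_0 <-> cvl_0_1 loopback that nvmftestinit sets up (sketch)
  ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Every target-side command that follows in the log, including nvmf_tgt itself, is then wrapped in ip netns exec cvl_0_0_ns_spdk.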
00:18:17.563 [2024-12-10 04:05:16.299409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.563 [2024-12-10 04:05:16.299523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:17.563 [2024-12-10 04:05:16.299607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.563 [2024-12-10 04:05:16.299609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.822 [2024-12-10 04:05:17.068097] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.822 Malloc0 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.822 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.080 [2024-12-10 04:05:17.112380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:18.080 { 00:18:18.080 "params": { 00:18:18.080 "name": "Nvme$subsystem", 00:18:18.080 "trtype": "$TEST_TRANSPORT", 00:18:18.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.080 "adrfam": "ipv4", 00:18:18.080 "trsvcid": "$NVMF_PORT", 00:18:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.080 "hdgst": ${hdgst:-false}, 00:18:18.080 "ddgst": ${ddgst:-false} 00:18:18.080 }, 00:18:18.080 "method": "bdev_nvme_attach_controller" 00:18:18.080 } 00:18:18.080 EOF 00:18:18.080 )") 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:18.080 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:18.080 "params": { 00:18:18.080 "name": "Nvme1", 00:18:18.080 "trtype": "tcp", 00:18:18.081 "traddr": "10.0.0.2", 00:18:18.081 "adrfam": "ipv4", 00:18:18.081 "trsvcid": "4420", 00:18:18.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.081 "hdgst": false, 00:18:18.081 "ddgst": false 00:18:18.081 }, 00:18:18.081 "method": "bdev_nvme_attach_controller" 00:18:18.081 }' 00:18:18.081 [2024-12-10 04:05:17.161304] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
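Stripped of the xtrace wrappers, the target provisioning that bdevio exercises above is a five-call RPC sequence followed by launching the initiator-side binary against the generated JSON. A hand-condensed equivalent (flags copied from the trace; the comments and the $rpc shorthand are mine, interpretation only, and the test itself issues these through its rpc_cmd helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # -o from NVMF_TRANSPORT_OPTS; 8 KiB io-unit-size
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024   # 1024 MiB heap, no hugepages

The process substitution is what shows up as --json /dev/fd/62 in the trace, and the bdev_nvme_attach_controller JSON it carries is printed verbatim just above.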
00:18:18.081 [2024-12-10 04:05:17.161352] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid66749 ] 00:18:18.081 [2024-12-10 04:05:17.241348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.081 [2024-12-10 04:05:17.289222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.081 [2024-12-10 04:05:17.289328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.081 [2024-12-10 04:05:17.289327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.339 I/O targets: 00:18:18.339 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:18.339 00:18:18.339 00:18:18.339 CUnit - A unit testing framework for C - Version 2.1-3 00:18:18.339 http://cunit.sourceforge.net/ 00:18:18.339 00:18:18.339 00:18:18.339 Suite: bdevio tests on: Nvme1n1 00:18:18.597 Test: blockdev write read block ...passed 00:18:18.597 Test: blockdev write zeroes read block ...passed 00:18:18.597 Test: blockdev write zeroes read no split ...passed 00:18:18.597 Test: blockdev write zeroes read split ...passed 00:18:18.597 Test: blockdev write zeroes read split partial ...passed 00:18:18.597 Test: blockdev reset ...[2024-12-10 04:05:17.735037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:18.597 [2024-12-10 04:05:17.735106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03d30 (9): Bad file descriptor 00:18:18.597 [2024-12-10 04:05:17.750641] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:18.597 passed 00:18:18.597 Test: blockdev write read 8 blocks ...passed 00:18:18.597 Test: blockdev write read size > 128k ...passed 00:18:18.597 Test: blockdev write read invalid size ...passed 00:18:18.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:18.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:18.597 Test: blockdev write read max offset ...passed 00:18:18.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:18.855 Test: blockdev writev readv 8 blocks ...passed 00:18:18.855 Test: blockdev writev readv 30 x 1block ...passed 00:18:18.855 Test: blockdev writev readv block ...passed 00:18:18.855 Test: blockdev writev readv size > 128k ...passed 00:18:18.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:18.855 Test: blockdev comparev and writev ...[2024-12-10 04:05:18.001965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.001997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.002786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.855 [2024-12-10 04:05:18.002793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:18.855 passed 00:18:18.855 Test: blockdev nvme passthru rw ...passed 00:18:18.855 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:05:18.084432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.855 [2024-12-10 04:05:18.084452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.084560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.855 [2024-12-10 04:05:18.084570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.084664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.855 [2024-12-10 04:05:18.084674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:18.855 [2024-12-10 04:05:18.084775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.855 [2024-12-10 04:05:18.084785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:18.855 passed 00:18:18.855 Test: blockdev nvme admin passthru ...passed 00:18:18.855 Test: blockdev copy ...passed 00:18:18.855 00:18:18.855 Run Summary: Type Total Ran Passed Failed Inactive 00:18:18.855 suites 1 1 n/a 0 0 00:18:18.855 tests 23 23 23 0 0 00:18:18.855 asserts 152 152 152 0 n/a 00:18:18.855 00:18:18.855 Elapsed time = 1.058 seconds 00:18:19.113 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.114 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.114 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.372 rmmod nvme_tcp 00:18:19.372 rmmod nvme_fabrics 00:18:19.372 rmmod nvme_keyring 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 66506 ']' 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 66506 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 66506 ']' 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 66506 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66506 00:18:19.372 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:19.373 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:19.373 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66506' 00:18:19.373 killing process with pid 66506 00:18:19.373 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 66506 00:18:19.373 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 66506 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.632 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:22.168 00:18:22.168 real 0m10.883s 00:18:22.168 user 0m13.965s 00:18:22.168 sys 0m5.408s 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.168 ************************************ 00:18:22.168 END TEST nvmf_bdevio_no_huge 00:18:22.168 ************************************ 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.168 ************************************ 00:18:22.168 START TEST nvmf_tls 00:18:22.168 ************************************ 00:18:22.168 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.168 * Looking for test storage... 00:18:22.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.168 --rc genhtml_branch_coverage=1 00:18:22.168 --rc genhtml_function_coverage=1 00:18:22.168 --rc genhtml_legend=1 00:18:22.168 --rc geninfo_all_blocks=1 00:18:22.168 --rc geninfo_unexecuted_blocks=1 00:18:22.168 00:18:22.168 ' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.168 --rc genhtml_branch_coverage=1 00:18:22.168 --rc genhtml_function_coverage=1 00:18:22.168 --rc genhtml_legend=1 00:18:22.168 --rc geninfo_all_blocks=1 00:18:22.168 --rc geninfo_unexecuted_blocks=1 00:18:22.168 00:18:22.168 ' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.168 --rc genhtml_branch_coverage=1 00:18:22.168 --rc genhtml_function_coverage=1 00:18:22.168 --rc genhtml_legend=1 00:18:22.168 --rc geninfo_all_blocks=1 00:18:22.168 --rc geninfo_unexecuted_blocks=1 00:18:22.168 00:18:22.168 ' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.168 --rc genhtml_branch_coverage=1 00:18:22.168 --rc genhtml_function_coverage=1 00:18:22.168 --rc genhtml_legend=1 00:18:22.168 --rc geninfo_all_blocks=1 00:18:22.168 --rc geninfo_unexecuted_blocks=1 00:18:22.168 00:18:22.168 ' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
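The version gymnastics at the top of each test (lt 1.15 2 and friends) decide whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_branch_coverage/lcov_function_coverage switches. The comparison splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A condensed re-implementation of the helper pair exercised in this trace (the real scripts/common.sh splits this across lt, cmp_versions and decimal; the zero-fallback for non-numeric fields is an assumption, since that branch never fires here):

  lt() {    # "is $1 < $2" under scripts/common.sh version ordering (sketch)
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0    # decimal(): assumed to coerce non-numbers to 0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2.x: enable the legacy --rc coverage options'

With the values from this run, the first field comparison (1 < 2) succeeds immediately, which is why LCOV_OPTS above picks up the branch- and function-coverage flags.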
00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.168 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:22.169 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:28.739 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:28.739 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:28.739 Found net devices under 0000:af:00.0: cvl_0_0 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.739 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:28.740 Found net devices under 0000:af:00.1: cvl_0_1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.740 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:18:28.740 00:18:28.740 --- 10.0.0.2 ping statistics --- 00:18:28.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.740 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:18:28.740 00:18:28.740 --- 10.0.0.1 ping statistics --- 00:18:28.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.740 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70447 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70447 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70447 ']' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.740 [2024-12-10 04:05:27.148510] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
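By this point nvmftestinit has found both E810 ports, moved cvl_0_0 into the private namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), left cvl_0_1 in the default namespace as the initiator side (10.0.0.1), opened the iptables pinhole on port 4420, and verified reachability with ping in both directions. The target itself is launched inside that namespace with --wait-for-rpc so it stays idle until configured. A hedged sketch of the launch-and-wait step (paths as in this run; the rpc_get_methods poll stands in for the harness's waitforlisten):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    pid=$!
    # poll the UNIX-domain RPC socket until the app answers, then proceed to configure it
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt ($pid) is up on /var/tmp/spdk.sock"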
00:18:28.740 [2024-12-10 04:05:27.148555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.740 [2024-12-10 04:05:27.225531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.740 [2024-12-10 04:05:27.264677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.740 [2024-12-10 04:05:27.264709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.740 [2024-12-10 04:05:27.264716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.740 [2024-12-10 04:05:27.264722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.740 [2024-12-10 04:05:27.264727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.740 [2024-12-10 04:05:27.265223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:28.740 true 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.740 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:28.999 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:28.999 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:28.999 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.258 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:29.517 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:29.517 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:29.517 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:29.775 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.775 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:29.775 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:29.775 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:29.775 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:30.034 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:30.034 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZMi46lEd1u 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.5p41rZTTMi 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZMi46lEd1u 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.5p41rZTTMi 00:18:30.293 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:30.552 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:30.810 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZMi46lEd1u 00:18:30.810 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZMi46lEd1u 00:18:30.810 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.069 [2024-12-10 04:05:30.131555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.069 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.069 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.328 [2024-12-10 04:05:30.508529] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.328 [2024-12-10 04:05:30.508733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.328 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.587 malloc0 00:18:31.587 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.846 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZMi46lEd1u 00:18:31.846 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.105 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZMi46lEd1u 00:18:42.081 Initializing NVMe Controllers 00:18:42.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.081 Initialization complete. Launching workers. 00:18:42.081 ======================================================== 00:18:42.081 Latency(us) 00:18:42.081 Device Information : IOPS MiB/s Average min max 00:18:42.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16906.26 66.04 3785.68 762.72 6291.72 00:18:42.081 ======================================================== 00:18:42.081 Total : 16906.26 66.04 3785.68 762.72 6291.72 00:18:42.081 00:18:42.081 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZMi46lEd1u 00:18:42.081 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.081 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZMi46lEd1u 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72737 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72737 /var/tmp/bdevperf.sock 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72737 ']' 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:42.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.082 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.341 [2024-12-10 04:05:41.386438] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:42.341 [2024-12-10 04:05:41.386483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72737 ] 00:18:42.341 [2024-12-10 04:05:41.459674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.341 [2024-12-10 04:05:41.498530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.341 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.341 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.341 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZMi46lEd1u 00:18:42.599 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.858 [2024-12-10 04:05:41.975074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.858 TLSTESTn1 00:18:42.858 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.116 Running I/O for 10 seconds... 
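The happy-path pieces traced above condense into a short recipe: select the ssl sock implementation and pin TLS 1.3, generate an interchange-format PSK ("NVMeTLSkey-1:" + hash id 01 + base64 of the key bytes plus a CRC32 tail, as format_interchange_psk computed), store it 0600 in a tempfile, open the listener with -k, and map the host NQN to the key. All commands below appear in the trace; only the shell variable names are ours:

    KEY=/tmp/tmp.ZMi46lEd1u   # file holding NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, chmod 0600
    RPC=./scripts/rpc.py
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place, bdevperf registers the same key under its own RPC socket and attaches TLSTESTn1 with --psk key0; its per-second samples for the ten-second verify run follow.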
00:18:44.986 5414.00 IOPS, 21.15 MiB/s [2024-12-10T03:05:45.213Z] 5491.00 IOPS, 21.45 MiB/s [2024-12-10T03:05:46.215Z] 5532.33 IOPS, 21.61 MiB/s [2024-12-10T03:05:47.592Z] 5517.75 IOPS, 21.55 MiB/s [2024-12-10T03:05:48.528Z] 5537.80 IOPS, 21.63 MiB/s [2024-12-10T03:05:49.464Z] 5536.50 IOPS, 21.63 MiB/s [2024-12-10T03:05:50.400Z] 5415.71 IOPS, 21.16 MiB/s [2024-12-10T03:05:51.336Z] 5360.50 IOPS, 20.94 MiB/s [2024-12-10T03:05:52.272Z] 5283.78 IOPS, 20.64 MiB/s [2024-12-10T03:05:52.272Z] 5267.80 IOPS, 20.58 MiB/s 00:18:52.986 Latency(us) 00:18:52.986 [2024-12-10T03:05:52.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.986 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.986 Verification LBA range: start 0x0 length 0x2000 00:18:52.986 TLSTESTn1 : 10.02 5271.72 20.59 0.00 0.00 24245.12 4930.80 33704.23 00:18:52.986 [2024-12-10T03:05:52.272Z] =================================================================================================================== 00:18:52.986 [2024-12-10T03:05:52.272Z] Total : 5271.72 20.59 0.00 0.00 24245.12 4930.80 33704.23 00:18:52.986 { 00:18:52.986 "results": [ 00:18:52.986 { 00:18:52.986 "job": "TLSTESTn1", 00:18:52.986 "core_mask": "0x4", 00:18:52.986 "workload": "verify", 00:18:52.986 "status": "finished", 00:18:52.986 "verify_range": { 00:18:52.986 "start": 0, 00:18:52.986 "length": 8192 00:18:52.986 }, 00:18:52.986 "queue_depth": 128, 00:18:52.986 "io_size": 4096, 00:18:52.986 "runtime": 10.016662, 00:18:52.986 "iops": 5271.716266356996, 00:18:52.986 "mibps": 20.592641665457016, 00:18:52.986 "io_failed": 0, 00:18:52.986 "io_timeout": 0, 00:18:52.986 "avg_latency_us": 24245.118862445386, 00:18:52.986 "min_latency_us": 4930.80380952381, 00:18:52.986 "max_latency_us": 33704.22857142857 00:18:52.986 } 00:18:52.986 ], 00:18:52.986 "core_count": 1 00:18:52.986 } 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72737 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72737 ']' 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72737 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.986 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72737 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72737' 00:18:53.246 killing process with pid 72737 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72737 00:18:53.246 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.246 00:18:53.246 Latency(us) 00:18:53.246 [2024-12-10T03:05:52.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.246 [2024-12-10T03:05:52.532Z] 
=================================================================================================================== 00:18:53.246 [2024-12-10T03:05:52.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72737 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5p41rZTTMi 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5p41rZTTMi 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5p41rZTTMi 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5p41rZTTMi 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74524 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74524 /var/tmp/bdevperf.sock 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74524 ']' 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
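Next comes the first deliberate failure: the same bdevperf attach is repeated, but the key registered as key0 is /tmp/tmp.5p41rZTTMi, the second interchange key, whose bytes do not match what the target holds for host1, so the TLS handshake cannot complete even though the PSK identity itself is known. The NOT wrapper inverts the exit status, so the test passes only if the attach fails. A hedged sketch of that inversion pattern (the real NOT lives in autotest_common.sh; this is a simplified stand-in):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # succeed only when the command fails
    NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 \
        && echo 'mismatched PSK rejected, as required'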
00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.246 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.246 [2024-12-10 04:05:52.496525] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:53.246 [2024-12-10 04:05:52.496576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74524 ] 00:18:53.505 [2024-12-10 04:05:52.568549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.505 [2024-12-10 04:05:52.604520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.505 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.505 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.505 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5p41rZTTMi 00:18:53.763 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.022 [2024-12-10 04:05:53.060772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.022 [2024-12-10 04:05:53.065440] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:54.022 [2024-12-10 04:05:53.066083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc410 (107): Transport endpoint is not connected 00:18:54.022 [2024-12-10 04:05:53.067076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc410 (9): Bad file descriptor 00:18:54.022 [2024-12-10 04:05:53.068078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:54.022 [2024-12-10 04:05:53.068089] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:54.022 [2024-12-10 04:05:53.068097] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:54.022 [2024-12-10 04:05:53.068105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:54.022 request: 00:18:54.022 { 00:18:54.022 "name": "TLSTEST", 00:18:54.022 "trtype": "tcp", 00:18:54.022 "traddr": "10.0.0.2", 00:18:54.022 "adrfam": "ipv4", 00:18:54.022 "trsvcid": "4420", 00:18:54.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.022 "prchk_reftag": false, 00:18:54.022 "prchk_guard": false, 00:18:54.022 "hdgst": false, 00:18:54.022 "ddgst": false, 00:18:54.022 "psk": "key0", 00:18:54.022 "allow_unrecognized_csi": false, 00:18:54.022 "method": "bdev_nvme_attach_controller", 00:18:54.022 "req_id": 1 00:18:54.022 } 00:18:54.022 Got JSON-RPC error response 00:18:54.022 response: 00:18:54.022 { 00:18:54.022 "code": -5, 00:18:54.022 "message": "Input/output error" 00:18:54.022 } 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74524 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74524 ']' 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74524 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74524 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74524' 00:18:54.022 killing process with pid 74524 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74524 00:18:54.022 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.022 00:18:54.022 Latency(us) 00:18:54.022 [2024-12-10T03:05:53.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.022 [2024-12-10T03:05:53.308Z] =================================================================================================================== 00:18:54.022 [2024-12-10T03:05:53.308Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74524 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZMi46lEd1u 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZMi46lEd1u 
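The failure above surfaces on the client side as spdk_sock_recv() returning errno 107 (Transport endpoint is not connected) while flushing the admin queue pair: the target appears to drop the TCP connection during the TLS handshake, the controller enters the error state, and bdev_nvme_attach_controller reports it over JSON-RPC as code -5, Input/output error. run_bdevperf's return 1 is what the surrounding NOT then converts back into a pass. A small hedged check of that response shape (jq is our addition, not part of the harness):

    # assert the expected failure mode from the response body shown above
    echo '{"code": -5, "message": "Input/output error"}' \
        | jq -e '.code == -5' >/dev/null && echo 'got the expected -5 (EIO) from the attach'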
00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZMi46lEd1u 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:54.022 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZMi46lEd1u 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74654 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74654 /var/tmp/bdevperf.sock 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74654 ']' 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.023 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.281 [2024-12-10 04:05:53.348195] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:18:54.281 [2024-12-10 04:05:53.348247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74654 ] 00:18:54.281 [2024-12-10 04:05:53.423422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.281 [2024-12-10 04:05:53.461774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.281 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.281 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.281 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZMi46lEd1u 00:18:54.540 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:54.799 [2024-12-10 04:05:53.905345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.799 [2024-12-10 04:05:53.916675] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:54.799 [2024-12-10 04:05:53.916697] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:54.799 [2024-12-10 04:05:53.916719] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:54.799 [2024-12-10 04:05:53.917708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f2410 (107): Transport endpoint is not connected 00:18:54.799 [2024-12-10 04:05:53.918703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f2410 (9): Bad file descriptor 00:18:54.799 [2024-12-10 04:05:53.919705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:54.799 [2024-12-10 04:05:53.919717] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:54.799 [2024-12-10 04:05:53.919724] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:54.799 [2024-12-10 04:05:53.919732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
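This second failure isolates the host mapping rather than the key bytes: the client presents the good key, but nqn.2016-06.io.spdk:host2 was never added to cnode1, so tcp_sock_get_key finds nothing for the TLS PSK identity, which the trace prints in its NVMe/TCP form "NVMe0R01 <hostnqn> <subnqn>". A hedged sketch of the identity the target looks up, plus the add_host call that would have made it resolvable (mirroring the earlier successful mapping for host1):

    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # registering host2 against the subsystem is what would populate that lookup:
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0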
00:18:54.799 request: 00:18:54.799 { 00:18:54.799 "name": "TLSTEST", 00:18:54.799 "trtype": "tcp", 00:18:54.799 "traddr": "10.0.0.2", 00:18:54.799 "adrfam": "ipv4", 00:18:54.799 "trsvcid": "4420", 00:18:54.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.799 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:54.799 "prchk_reftag": false, 00:18:54.799 "prchk_guard": false, 00:18:54.799 "hdgst": false, 00:18:54.799 "ddgst": false, 00:18:54.799 "psk": "key0", 00:18:54.799 "allow_unrecognized_csi": false, 00:18:54.799 "method": "bdev_nvme_attach_controller", 00:18:54.799 "req_id": 1 00:18:54.799 } 00:18:54.799 Got JSON-RPC error response 00:18:54.799 response: 00:18:54.799 { 00:18:54.799 "code": -5, 00:18:54.799 "message": "Input/output error" 00:18:54.799 } 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74654 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74654 ']' 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74654 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74654 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74654' 00:18:54.799 killing process with pid 74654 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74654 00:18:54.799 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.799 00:18:54.799 Latency(us) 00:18:54.799 [2024-12-10T03:05:54.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.799 [2024-12-10T03:05:54.085Z] =================================================================================================================== 00:18:54.799 [2024-12-10T03:05:54.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.799 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74654 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZMi46lEd1u 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZMi46lEd1u 
00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZMi46lEd1u 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZMi46lEd1u 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74766 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74766 /var/tmp/bdevperf.sock 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74766 ']' 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.059 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 [2024-12-10 04:05:54.192416] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:18:55.059 [2024-12-10 04:05:54.192464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74766 ] 00:18:55.059 [2024-12-10 04:05:54.265129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.059 [2024-12-10 04:05:54.301257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.318 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.318 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.318 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZMi46lEd1u 00:18:55.318 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.576 [2024-12-10 04:05:54.757338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.576 [2024-12-10 04:05:54.768851] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:55.576 [2024-12-10 04:05:54.768872] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:55.576 [2024-12-10 04:05:54.768894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.576 [2024-12-10 04:05:54.769647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518410 (107): Transport endpoint is not connected 00:18:55.576 [2024-12-10 04:05:54.770641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518410 (9): Bad file descriptor 00:18:55.576 [2024-12-10 04:05:54.771643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:55.576 [2024-12-10 04:05:54.771654] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.576 [2024-12-10 04:05:54.771662] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:55.576 [2024-12-10 04:05:54.771671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:55.576 request: 00:18:55.576 { 00:18:55.576 "name": "TLSTEST", 00:18:55.576 "trtype": "tcp", 00:18:55.576 "traddr": "10.0.0.2", 00:18:55.576 "adrfam": "ipv4", 00:18:55.576 "trsvcid": "4420", 00:18:55.576 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:55.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.576 "prchk_reftag": false, 00:18:55.576 "prchk_guard": false, 00:18:55.576 "hdgst": false, 00:18:55.576 "ddgst": false, 00:18:55.576 "psk": "key0", 00:18:55.576 "allow_unrecognized_csi": false, 00:18:55.576 "method": "bdev_nvme_attach_controller", 00:18:55.576 "req_id": 1 00:18:55.576 } 00:18:55.576 Got JSON-RPC error response 00:18:55.576 response: 00:18:55.576 { 00:18:55.576 "code": -5, 00:18:55.576 "message": "Input/output error" 00:18:55.576 } 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74766 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74766 ']' 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74766 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74766 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74766' 00:18:55.576 killing process with pid 74766 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74766 00:18:55.576 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.576 00:18:55.576 Latency(us) 00:18:55.576 [2024-12-10T03:05:54.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.576 [2024-12-10T03:05:54.862Z] =================================================================================================================== 00:18:55.576 [2024-12-10T03:05:54.862Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.576 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74766 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:55.836 04:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:55.836 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74994 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74994 /var/tmp/bdevperf.sock 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74994 ']' 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.836 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.836 [2024-12-10 04:05:55.051005] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:18:55.836 [2024-12-10 04:05:55.051053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74994 ] 00:18:56.093 [2024-12-10 04:05:55.123558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.093 [2024-12-10 04:05:55.159668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.093 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.093 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.093 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:56.351 [2024-12-10 04:05:55.414541] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:56.351 [2024-12-10 04:05:55.414574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:56.351 request: 00:18:56.351 { 00:18:56.351 "name": "key0", 00:18:56.351 "path": "", 00:18:56.351 "method": "keyring_file_add_key", 00:18:56.351 "req_id": 1 00:18:56.351 } 00:18:56.351 Got JSON-RPC error response 00:18:56.351 response: 00:18:56.351 { 00:18:56.351 "code": -1, 00:18:56.351 "message": "Operation not permitted" 00:18:56.351 } 00:18:56.351 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.351 [2024-12-10 04:05:55.607129] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.351 [2024-12-10 04:05:55.607164] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:56.351 request: 00:18:56.351 { 00:18:56.351 "name": "TLSTEST", 00:18:56.351 "trtype": "tcp", 00:18:56.351 "traddr": "10.0.0.2", 00:18:56.351 "adrfam": "ipv4", 00:18:56.351 "trsvcid": "4420", 00:18:56.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.351 "prchk_reftag": false, 00:18:56.351 "prchk_guard": false, 00:18:56.351 "hdgst": false, 00:18:56.351 "ddgst": false, 00:18:56.351 "psk": "key0", 00:18:56.351 "allow_unrecognized_csi": false, 00:18:56.351 "method": "bdev_nvme_attach_controller", 00:18:56.351 "req_id": 1 00:18:56.351 } 00:18:56.351 Got JSON-RPC error response 00:18:56.351 response: 00:18:56.351 { 00:18:56.351 "code": -126, 00:18:56.351 "message": "Required key not available" 00:18:56.351 } 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74994 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74994 ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74994 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74994 
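Both failures above are driven through the same negative-test wrapper: the xtrace lines around valid_exec_arg and the es bookkeeping show a helper that runs the wrapped command and succeeds only when it fails. A minimal hypothetical form of that shape (the real autotest_common.sh helper also distinguishes shell functions from binaries via type -t):

    # Minimal form of the NOT pattern seen in the xtrace above (sketch only)
    NOT() {
        local es=0
        "$@" || es=$?    # capture the wrapped command's exit status
        (( es != 0 ))    # invert: the negative test passes iff the command failed
    }
    NOT false && echo "negative test passed"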
00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74994' 00:18:56.610 killing process with pid 74994 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74994 00:18:56.610 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.610 00:18:56.610 Latency(us) 00:18:56.610 [2024-12-10T03:05:55.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.610 [2024-12-10T03:05:55.896Z] =================================================================================================================== 00:18:56.610 [2024-12-10T03:05:55.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74994 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70447 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70447 ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70447 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70447 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70447' 00:18:56.610 killing process with pid 70447 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70447 00:18:56.610 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70447 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.29uvJDZwgq 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.29uvJDZwgq 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75158 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75158 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75158 ']' 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.869 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.128 [2024-12-10 04:05:56.158954] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:57.128 [2024-12-10 04:05:56.159001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.128 [2024-12-10 04:05:56.236192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.128 [2024-12-10 04:05:56.274537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.128 [2024-12-10 04:05:56.274573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
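The key_long value above can be re-derived by hand. A sketch of the interchange encoding, assuming the format is base64 of the configured key bytes followed by their CRC32 in little-endian order, wrapped in an "NVMeTLSkey-1:<hash>:...:" envelope where 02 selects SHA-384; the CRC32/endianness details are inferred from the output, not quoted from nvmf/common.sh:

    # Re-deriving key_long from the inputs shown above (sketch)
    python3 -c 'import base64, zlib; \
    key = b"00112233445566778899aabbccddeeff0011223344556677"; \
    crc = zlib.crc32(key).to_bytes(4, "little"); \
    print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())'
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: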
00:18:57.128 [2024-12-10 04:05:56.274580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.128 [2024-12-10 04:05:56.274586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.128 [2024-12-10 04:05:56.274591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.128 [2024-12-10 04:05:56.275067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.128 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.128 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:57.128 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.128 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.128 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.386 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.386 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:18:57.386 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.29uvJDZwgq 00:18:57.386 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.386 [2024-12-10 04:05:56.582982] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.386 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.644 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.903 [2024-12-10 04:05:56.975994] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.903 [2024-12-10 04:05:56.976215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.903 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.903 malloc0 00:18:57.903 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.162 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:18:58.441 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.29uvJDZwgq 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.29uvJDZwgq 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75488 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75488 /var/tmp/bdevperf.sock 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75488 ']' 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.806 [2024-12-10 04:05:57.801740] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
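For reference, the target-side sequence that setup_nvmf_tgt just completed reduces to seven RPCs (NQNs and paths copied from this run; rpc.py abbreviates the full scripts/rpc.py path used above):

    # Target-side TLS setup distilled from the RPCs above (sketch)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq        # absolute path, mode 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0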
00:18:58.806 [2024-12-10 04:05:57.801790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75488 ] 00:18:58.806 [2024-12-10 04:05:57.872302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.806 [2024-12-10 04:05:57.912738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.806 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.806 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.806 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:18:59.065 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.324 [2024-12-10 04:05:58.361356] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.324 TLSTESTn1 00:18:59.324 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.324 Running I/O for 10 seconds... 00:19:01.635 5255.00 IOPS, 20.53 MiB/s [2024-12-10T03:06:01.856Z] 5336.00 IOPS, 20.84 MiB/s [2024-12-10T03:06:02.791Z] 5363.00 IOPS, 20.95 MiB/s [2024-12-10T03:06:03.726Z] 5434.25 IOPS, 21.23 MiB/s [2024-12-10T03:06:04.662Z] 5472.60 IOPS, 21.38 MiB/s [2024-12-10T03:06:05.598Z] 5505.33 IOPS, 21.51 MiB/s [2024-12-10T03:06:06.973Z] 5516.71 IOPS, 21.55 MiB/s [2024-12-10T03:06:07.909Z] 5485.88 IOPS, 21.43 MiB/s [2024-12-10T03:06:08.846Z] 5481.33 IOPS, 21.41 MiB/s [2024-12-10T03:06:08.846Z] 5492.10 IOPS, 21.45 MiB/s 00:19:09.560 Latency(us) 00:19:09.560 [2024-12-10T03:06:08.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.560 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.560 Verification LBA range: start 0x0 length 0x2000 00:19:09.560 TLSTESTn1 : 10.03 5487.63 21.44 0.00 0.00 23276.38 4649.94 30208.98 00:19:09.560 [2024-12-10T03:06:08.846Z] =================================================================================================================== 00:19:09.560 [2024-12-10T03:06:08.846Z] Total : 5487.63 21.44 0.00 0.00 23276.38 4649.94 30208.98 00:19:09.560 { 00:19:09.560 "results": [ 00:19:09.560 { 00:19:09.560 "job": "TLSTESTn1", 00:19:09.560 "core_mask": "0x4", 00:19:09.560 "workload": "verify", 00:19:09.560 "status": "finished", 00:19:09.560 "verify_range": { 00:19:09.560 "start": 0, 00:19:09.560 "length": 8192 00:19:09.560 }, 00:19:09.560 "queue_depth": 128, 00:19:09.560 "io_size": 4096, 00:19:09.560 "runtime": 10.031294, 00:19:09.560 "iops": 5487.627020003601, 00:19:09.560 "mibps": 21.436043046889065, 00:19:09.560 "io_failed": 0, 00:19:09.560 "io_timeout": 0, 00:19:09.560 "avg_latency_us": 23276.375973038248, 00:19:09.560 "min_latency_us": 4649.935238095238, 00:19:09.560 "max_latency_us": 30208.975238095238 00:19:09.560 } 00:19:09.560 ], 00:19:09.560 
"core_count": 1 00:19:09.560 } 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 75488 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75488 ']' 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75488 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75488 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75488' 00:19:09.560 killing process with pid 75488 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75488 00:19:09.560 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.560 00:19:09.560 Latency(us) 00:19:09.560 [2024-12-10T03:06:08.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.560 [2024-12-10T03:06:08.846Z] =================================================================================================================== 00:19:09.560 [2024-12-10T03:06:08.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.560 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75488 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.29uvJDZwgq 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.29uvJDZwgq 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.29uvJDZwgq 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.29uvJDZwgq 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.819 04:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.29uvJDZwgq 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77804 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77804 /var/tmp/bdevperf.sock 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77804 ']' 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.819 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.819 [2024-12-10 04:06:08.902924] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
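As a sanity check, the TLSTESTn1 summary a few lines up is internally consistent: 5487.63 IOPS at the 4096-byte IO size configured with -o 4096 works out to the reported 21.44 MiB/s.

    # Cross-checking the reported throughput (4 KiB per IO)
    awk 'BEGIN { printf "%.2f MiB/s\n", 5487.63 * 4096 / (1024 * 1024) }'
    # -> 21.44 MiB/s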
00:19:09.819 [2024-12-10 04:06:08.902971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77804 ] 00:19:09.819 [2024-12-10 04:06:08.976855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.819 [2024-12-10 04:06:09.017529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.078 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.078 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.078 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:10.078 [2024-12-10 04:06:09.269631] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.29uvJDZwgq': 0100666 00:19:10.078 [2024-12-10 04:06:09.269657] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:10.078 request: 00:19:10.078 { 00:19:10.078 "name": "key0", 00:19:10.078 "path": "/tmp/tmp.29uvJDZwgq", 00:19:10.078 "method": "keyring_file_add_key", 00:19:10.078 "req_id": 1 00:19:10.078 } 00:19:10.078 Got JSON-RPC error response 00:19:10.078 response: 00:19:10.078 { 00:19:10.078 "code": -1, 00:19:10.078 "message": "Operation not permitted" 00:19:10.078 } 00:19:10.078 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.337 [2024-12-10 04:06:09.466223] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.337 [2024-12-10 04:06:09.466251] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:10.337 request: 00:19:10.337 { 00:19:10.337 "name": "TLSTEST", 00:19:10.337 "trtype": "tcp", 00:19:10.337 "traddr": "10.0.0.2", 00:19:10.337 "adrfam": "ipv4", 00:19:10.337 "trsvcid": "4420", 00:19:10.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.337 "prchk_reftag": false, 00:19:10.337 "prchk_guard": false, 00:19:10.337 "hdgst": false, 00:19:10.337 "ddgst": false, 00:19:10.337 "psk": "key0", 00:19:10.337 "allow_unrecognized_csi": false, 00:19:10.337 "method": "bdev_nvme_attach_controller", 00:19:10.337 "req_id": 1 00:19:10.337 } 00:19:10.337 Got JSON-RPC error response 00:19:10.337 response: 00:19:10.337 { 00:19:10.337 "code": -126, 00:19:10.337 "message": "Required key not available" 00:19:10.337 } 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 77804 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77804 ']' 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77804 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77804 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77804' 00:19:10.337 killing process with pid 77804 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77804 00:19:10.337 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.337 00:19:10.337 Latency(us) 00:19:10.337 [2024-12-10T03:06:09.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.337 [2024-12-10T03:06:09.623Z] =================================================================================================================== 00:19:10.337 [2024-12-10T03:06:09.623Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.337 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77804 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 75158 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75158 ']' 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75158 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75158 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75158' 00:19:10.596 killing process with pid 75158 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75158 00:19:10.596 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75158 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=77853 00:19:10.855 
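The 0100666 rejection above is the point of the chmod 0666 step: keyring_file_add_key appears to refuse any key file readable by group or others, so the test loosens the mode and expects the add to fail. A sketch of the two outcomes (rpc.py abbreviates the full scripts/rpc.py path):

    chmod 0666 /tmp/tmp.29uvJDZwgq
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq   # -1: Invalid permissions ... 0100666
    chmod 0600 /tmp/tmp.29uvJDZwgq
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq   # accepted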
04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 77853 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77853 ']' 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.855 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 [2024-12-10 04:06:09.967727] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:10.855 [2024-12-10 04:06:09.967773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.855 [2024-12-10 04:06:10.033966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.855 [2024-12-10 04:06:10.076571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.855 [2024-12-10 04:06:10.076610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.855 [2024-12-10 04:06:10.076619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.855 [2024-12-10 04:06:10.076628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.855 [2024-12-10 04:06:10.076634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:10.855 [2024-12-10 04:06:10.077195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.29uvJDZwgq 00:19:11.114 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.114 [2024-12-10 04:06:10.390371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.372 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.372 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.631 [2024-12-10 04:06:10.771367] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.631 [2024-12-10 04:06:10.771569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.631 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.890 malloc0 00:19:11.890 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:11.890 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:12.148 [2024-12-10 
04:06:11.312793] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.29uvJDZwgq': 0100666 00:19:12.148 [2024-12-10 04:06:11.312819] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:12.148 request: 00:19:12.148 { 00:19:12.148 "name": "key0", 00:19:12.148 "path": "/tmp/tmp.29uvJDZwgq", 00:19:12.148 "method": "keyring_file_add_key", 00:19:12.148 "req_id": 1 00:19:12.148 } 00:19:12.148 Got JSON-RPC error response 00:19:12.148 response: 00:19:12.148 { 00:19:12.148 "code": -1, 00:19:12.148 "message": "Operation not permitted" 00:19:12.148 } 00:19:12.148 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.407 [2024-12-10 04:06:11.497291] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:12.407 [2024-12-10 04:06:11.497327] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:12.407 request: 00:19:12.407 { 00:19:12.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.407 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.407 "psk": "key0", 00:19:12.407 "method": "nvmf_subsystem_add_host", 00:19:12.407 "req_id": 1 00:19:12.407 } 00:19:12.407 Got JSON-RPC error response 00:19:12.407 response: 00:19:12.407 { 00:19:12.407 "code": -32603, 00:19:12.407 "message": "Internal error" 00:19:12.407 } 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 77853 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77853 ']' 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77853 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77853 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77853' 00:19:12.407 killing process with pid 77853 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77853 00:19:12.407 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77853 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.29uvJDZwgq 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=78305 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 78305 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 78305 ']' 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.666 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.666 [2024-12-10 04:06:11.781699] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:12.666 [2024-12-10 04:06:11.781744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.666 [2024-12-10 04:06:11.853783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.666 [2024-12-10 04:06:11.892441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.666 [2024-12-10 04:06:11.892474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.666 [2024-12-10 04:06:11.892482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.666 [2024-12-10 04:06:11.892488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.666 [2024-12-10 04:06:11.892493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
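The initiator-side counterpart that run_bdevperf exercises throughout this log reduces to four commands (copied from the successful pid 75488 run earlier; a sketch, not the helper itself):

    # Initiator-side TLS flow used by run_bdevperf (sketch)
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests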
00:19:12.666 [2024-12-10 04:06:11.892998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.925 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.925 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.925 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.925 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.925 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.925 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.925 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:19:12.925 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.29uvJDZwgq 00:19:12.925 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:12.925 [2024-12-10 04:06:12.185538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.183 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:13.183 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:13.442 [2024-12-10 04:06:12.562539] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.442 [2024-12-10 04:06:12.562750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.442 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:13.701 malloc0 00:19:13.701 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:13.959 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:13.959 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=78556 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 78556 /var/tmp/bdevperf.sock 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 78556 ']' 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.218 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.218 [2024-12-10 04:06:13.431776] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:14.218 [2024-12-10 04:06:13.431825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78556 ] 00:19:14.476 [2024-12-10 04:06:13.506937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.476 [2024-12-10 04:06:13.546245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.476 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.476 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:14.476 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:14.734 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.734 [2024-12-10 04:06:14.005774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.992 TLSTESTn1 00:19:14.992 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:15.251 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:15.251 "subsystems": [ 00:19:15.251 { 00:19:15.251 "subsystem": "keyring", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "keyring_file_add_key", 00:19:15.251 "params": { 00:19:15.251 "name": "key0", 00:19:15.251 "path": "/tmp/tmp.29uvJDZwgq" 00:19:15.251 } 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "iobuf", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "iobuf_set_options", 00:19:15.251 "params": { 00:19:15.251 "small_pool_count": 8192, 00:19:15.251 "large_pool_count": 1024, 00:19:15.251 "small_bufsize": 8192, 00:19:15.251 "large_bufsize": 135168, 00:19:15.251 "enable_numa": false 00:19:15.251 } 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "sock", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "sock_set_default_impl", 00:19:15.251 "params": { 00:19:15.251 "impl_name": "posix" 
00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "sock_impl_set_options", 00:19:15.251 "params": { 00:19:15.251 "impl_name": "ssl", 00:19:15.251 "recv_buf_size": 4096, 00:19:15.251 "send_buf_size": 4096, 00:19:15.251 "enable_recv_pipe": true, 00:19:15.251 "enable_quickack": false, 00:19:15.251 "enable_placement_id": 0, 00:19:15.251 "enable_zerocopy_send_server": true, 00:19:15.251 "enable_zerocopy_send_client": false, 00:19:15.251 "zerocopy_threshold": 0, 00:19:15.251 "tls_version": 0, 00:19:15.251 "enable_ktls": false 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "sock_impl_set_options", 00:19:15.251 "params": { 00:19:15.251 "impl_name": "posix", 00:19:15.251 "recv_buf_size": 2097152, 00:19:15.251 "send_buf_size": 2097152, 00:19:15.251 "enable_recv_pipe": true, 00:19:15.251 "enable_quickack": false, 00:19:15.251 "enable_placement_id": 0, 00:19:15.251 "enable_zerocopy_send_server": true, 00:19:15.251 "enable_zerocopy_send_client": false, 00:19:15.251 "zerocopy_threshold": 0, 00:19:15.251 "tls_version": 0, 00:19:15.251 "enable_ktls": false 00:19:15.251 } 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "vmd", 00:19:15.251 "config": [] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "accel", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "accel_set_options", 00:19:15.251 "params": { 00:19:15.251 "small_cache_size": 128, 00:19:15.251 "large_cache_size": 16, 00:19:15.251 "task_count": 2048, 00:19:15.251 "sequence_count": 2048, 00:19:15.251 "buf_count": 2048 00:19:15.251 } 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "bdev", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "bdev_set_options", 00:19:15.251 "params": { 00:19:15.251 "bdev_io_pool_size": 65535, 00:19:15.251 "bdev_io_cache_size": 256, 00:19:15.251 "bdev_auto_examine": true, 00:19:15.251 "iobuf_small_cache_size": 128, 00:19:15.251 "iobuf_large_cache_size": 16 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_raid_set_options", 00:19:15.251 "params": { 00:19:15.251 "process_window_size_kb": 1024, 00:19:15.251 "process_max_bandwidth_mb_sec": 0 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_iscsi_set_options", 00:19:15.251 "params": { 00:19:15.251 "timeout_sec": 30 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_nvme_set_options", 00:19:15.251 "params": { 00:19:15.251 "action_on_timeout": "none", 00:19:15.251 "timeout_us": 0, 00:19:15.251 "timeout_admin_us": 0, 00:19:15.251 "keep_alive_timeout_ms": 10000, 00:19:15.251 "arbitration_burst": 0, 00:19:15.251 "low_priority_weight": 0, 00:19:15.251 "medium_priority_weight": 0, 00:19:15.251 "high_priority_weight": 0, 00:19:15.251 "nvme_adminq_poll_period_us": 10000, 00:19:15.251 "nvme_ioq_poll_period_us": 0, 00:19:15.251 "io_queue_requests": 0, 00:19:15.251 "delay_cmd_submit": true, 00:19:15.251 "transport_retry_count": 4, 00:19:15.251 "bdev_retry_count": 3, 00:19:15.251 "transport_ack_timeout": 0, 00:19:15.251 "ctrlr_loss_timeout_sec": 0, 00:19:15.251 "reconnect_delay_sec": 0, 00:19:15.251 "fast_io_fail_timeout_sec": 0, 00:19:15.251 "disable_auto_failback": false, 00:19:15.251 "generate_uuids": false, 00:19:15.251 "transport_tos": 0, 00:19:15.251 "nvme_error_stat": false, 00:19:15.251 "rdma_srq_size": 0, 00:19:15.251 "io_path_stat": false, 00:19:15.251 "allow_accel_sequence": false, 00:19:15.251 "rdma_max_cq_size": 0, 00:19:15.251 
"rdma_cm_event_timeout_ms": 0, 00:19:15.251 "dhchap_digests": [ 00:19:15.251 "sha256", 00:19:15.251 "sha384", 00:19:15.251 "sha512" 00:19:15.251 ], 00:19:15.251 "dhchap_dhgroups": [ 00:19:15.251 "null", 00:19:15.251 "ffdhe2048", 00:19:15.251 "ffdhe3072", 00:19:15.251 "ffdhe4096", 00:19:15.251 "ffdhe6144", 00:19:15.251 "ffdhe8192" 00:19:15.251 ] 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_nvme_set_hotplug", 00:19:15.251 "params": { 00:19:15.251 "period_us": 100000, 00:19:15.251 "enable": false 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_malloc_create", 00:19:15.251 "params": { 00:19:15.251 "name": "malloc0", 00:19:15.251 "num_blocks": 8192, 00:19:15.251 "block_size": 4096, 00:19:15.251 "physical_block_size": 4096, 00:19:15.251 "uuid": "617fe4ef-1100-4cab-a95f-9c94f2deac34", 00:19:15.251 "optimal_io_boundary": 0, 00:19:15.251 "md_size": 0, 00:19:15.251 "dif_type": 0, 00:19:15.251 "dif_is_head_of_md": false, 00:19:15.251 "dif_pi_format": 0 00:19:15.251 } 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "method": "bdev_wait_for_examine" 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "nbd", 00:19:15.251 "config": [] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "scheduler", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "framework_set_scheduler", 00:19:15.251 "params": { 00:19:15.251 "name": "static" 00:19:15.251 } 00:19:15.251 } 00:19:15.251 ] 00:19:15.251 }, 00:19:15.251 { 00:19:15.251 "subsystem": "nvmf", 00:19:15.251 "config": [ 00:19:15.251 { 00:19:15.251 "method": "nvmf_set_config", 00:19:15.251 "params": { 00:19:15.251 "discovery_filter": "match_any", 00:19:15.251 "admin_cmd_passthru": { 00:19:15.251 "identify_ctrlr": false 00:19:15.251 }, 00:19:15.251 "dhchap_digests": [ 00:19:15.251 "sha256", 00:19:15.251 "sha384", 00:19:15.251 "sha512" 00:19:15.251 ], 00:19:15.251 "dhchap_dhgroups": [ 00:19:15.251 "null", 00:19:15.251 "ffdhe2048", 00:19:15.251 "ffdhe3072", 00:19:15.251 "ffdhe4096", 00:19:15.251 "ffdhe6144", 00:19:15.251 "ffdhe8192" 00:19:15.252 ] 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_set_max_subsystems", 00:19:15.252 "params": { 00:19:15.252 "max_subsystems": 1024 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_set_crdt", 00:19:15.252 "params": { 00:19:15.252 "crdt1": 0, 00:19:15.252 "crdt2": 0, 00:19:15.252 "crdt3": 0 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_create_transport", 00:19:15.252 "params": { 00:19:15.252 "trtype": "TCP", 00:19:15.252 "max_queue_depth": 128, 00:19:15.252 "max_io_qpairs_per_ctrlr": 127, 00:19:15.252 "in_capsule_data_size": 4096, 00:19:15.252 "max_io_size": 131072, 00:19:15.252 "io_unit_size": 131072, 00:19:15.252 "max_aq_depth": 128, 00:19:15.252 "num_shared_buffers": 511, 00:19:15.252 "buf_cache_size": 4294967295, 00:19:15.252 "dif_insert_or_strip": false, 00:19:15.252 "zcopy": false, 00:19:15.252 "c2h_success": false, 00:19:15.252 "sock_priority": 0, 00:19:15.252 "abort_timeout_sec": 1, 00:19:15.252 "ack_timeout": 0, 00:19:15.252 "data_wr_pool_size": 0 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_create_subsystem", 00:19:15.252 "params": { 00:19:15.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.252 "allow_any_host": false, 00:19:15.252 "serial_number": "SPDK00000000000001", 00:19:15.252 "model_number": "SPDK bdev Controller", 00:19:15.252 "max_namespaces": 10, 00:19:15.252 "min_cntlid": 1, 00:19:15.252 
"max_cntlid": 65519, 00:19:15.252 "ana_reporting": false 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_subsystem_add_host", 00:19:15.252 "params": { 00:19:15.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.252 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.252 "psk": "key0" 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_subsystem_add_ns", 00:19:15.252 "params": { 00:19:15.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.252 "namespace": { 00:19:15.252 "nsid": 1, 00:19:15.252 "bdev_name": "malloc0", 00:19:15.252 "nguid": "617FE4EF11004CABA95F9C94F2DEAC34", 00:19:15.252 "uuid": "617fe4ef-1100-4cab-a95f-9c94f2deac34", 00:19:15.252 "no_auto_visible": false 00:19:15.252 } 00:19:15.252 } 00:19:15.252 }, 00:19:15.252 { 00:19:15.252 "method": "nvmf_subsystem_add_listener", 00:19:15.252 "params": { 00:19:15.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.252 "listen_address": { 00:19:15.252 "trtype": "TCP", 00:19:15.252 "adrfam": "IPv4", 00:19:15.252 "traddr": "10.0.0.2", 00:19:15.252 "trsvcid": "4420" 00:19:15.252 }, 00:19:15.252 "secure_channel": true 00:19:15.252 } 00:19:15.252 } 00:19:15.252 ] 00:19:15.252 } 00:19:15.252 ] 00:19:15.252 }' 00:19:15.252 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:15.511 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:15.511 "subsystems": [ 00:19:15.511 { 00:19:15.511 "subsystem": "keyring", 00:19:15.511 "config": [ 00:19:15.511 { 00:19:15.511 "method": "keyring_file_add_key", 00:19:15.511 "params": { 00:19:15.511 "name": "key0", 00:19:15.511 "path": "/tmp/tmp.29uvJDZwgq" 00:19:15.511 } 00:19:15.511 } 00:19:15.511 ] 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "subsystem": "iobuf", 00:19:15.511 "config": [ 00:19:15.511 { 00:19:15.511 "method": "iobuf_set_options", 00:19:15.511 "params": { 00:19:15.511 "small_pool_count": 8192, 00:19:15.511 "large_pool_count": 1024, 00:19:15.511 "small_bufsize": 8192, 00:19:15.511 "large_bufsize": 135168, 00:19:15.511 "enable_numa": false 00:19:15.511 } 00:19:15.511 } 00:19:15.511 ] 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "subsystem": "sock", 00:19:15.511 "config": [ 00:19:15.511 { 00:19:15.511 "method": "sock_set_default_impl", 00:19:15.511 "params": { 00:19:15.511 "impl_name": "posix" 00:19:15.511 } 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "method": "sock_impl_set_options", 00:19:15.511 "params": { 00:19:15.511 "impl_name": "ssl", 00:19:15.511 "recv_buf_size": 4096, 00:19:15.511 "send_buf_size": 4096, 00:19:15.511 "enable_recv_pipe": true, 00:19:15.511 "enable_quickack": false, 00:19:15.511 "enable_placement_id": 0, 00:19:15.511 "enable_zerocopy_send_server": true, 00:19:15.511 "enable_zerocopy_send_client": false, 00:19:15.511 "zerocopy_threshold": 0, 00:19:15.511 "tls_version": 0, 00:19:15.511 "enable_ktls": false 00:19:15.511 } 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "method": "sock_impl_set_options", 00:19:15.511 "params": { 00:19:15.511 "impl_name": "posix", 00:19:15.511 "recv_buf_size": 2097152, 00:19:15.511 "send_buf_size": 2097152, 00:19:15.511 "enable_recv_pipe": true, 00:19:15.511 "enable_quickack": false, 00:19:15.511 "enable_placement_id": 0, 00:19:15.511 "enable_zerocopy_send_server": true, 00:19:15.511 "enable_zerocopy_send_client": false, 00:19:15.511 "zerocopy_threshold": 0, 00:19:15.511 "tls_version": 0, 00:19:15.511 "enable_ktls": false 00:19:15.511 } 00:19:15.511 
} 00:19:15.511 ] 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "subsystem": "vmd", 00:19:15.511 "config": [] 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "subsystem": "accel", 00:19:15.511 "config": [ 00:19:15.511 { 00:19:15.511 "method": "accel_set_options", 00:19:15.511 "params": { 00:19:15.511 "small_cache_size": 128, 00:19:15.511 "large_cache_size": 16, 00:19:15.511 "task_count": 2048, 00:19:15.511 "sequence_count": 2048, 00:19:15.511 "buf_count": 2048 00:19:15.511 } 00:19:15.511 } 00:19:15.511 ] 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "subsystem": "bdev", 00:19:15.511 "config": [ 00:19:15.511 { 00:19:15.511 "method": "bdev_set_options", 00:19:15.511 "params": { 00:19:15.511 "bdev_io_pool_size": 65535, 00:19:15.511 "bdev_io_cache_size": 256, 00:19:15.511 "bdev_auto_examine": true, 00:19:15.511 "iobuf_small_cache_size": 128, 00:19:15.511 "iobuf_large_cache_size": 16 00:19:15.511 } 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "method": "bdev_raid_set_options", 00:19:15.511 "params": { 00:19:15.511 "process_window_size_kb": 1024, 00:19:15.511 "process_max_bandwidth_mb_sec": 0 00:19:15.511 } 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "method": "bdev_iscsi_set_options", 00:19:15.511 "params": { 00:19:15.511 "timeout_sec": 30 00:19:15.511 } 00:19:15.511 }, 00:19:15.511 { 00:19:15.511 "method": "bdev_nvme_set_options", 00:19:15.511 "params": { 00:19:15.511 "action_on_timeout": "none", 00:19:15.511 "timeout_us": 0, 00:19:15.511 "timeout_admin_us": 0, 00:19:15.511 "keep_alive_timeout_ms": 10000, 00:19:15.511 "arbitration_burst": 0, 00:19:15.511 "low_priority_weight": 0, 00:19:15.511 "medium_priority_weight": 0, 00:19:15.511 "high_priority_weight": 0, 00:19:15.511 "nvme_adminq_poll_period_us": 10000, 00:19:15.511 "nvme_ioq_poll_period_us": 0, 00:19:15.511 "io_queue_requests": 512, 00:19:15.511 "delay_cmd_submit": true, 00:19:15.511 "transport_retry_count": 4, 00:19:15.511 "bdev_retry_count": 3, 00:19:15.511 "transport_ack_timeout": 0, 00:19:15.511 "ctrlr_loss_timeout_sec": 0, 00:19:15.511 "reconnect_delay_sec": 0, 00:19:15.511 "fast_io_fail_timeout_sec": 0, 00:19:15.511 "disable_auto_failback": false, 00:19:15.511 "generate_uuids": false, 00:19:15.511 "transport_tos": 0, 00:19:15.511 "nvme_error_stat": false, 00:19:15.511 "rdma_srq_size": 0, 00:19:15.511 "io_path_stat": false, 00:19:15.511 "allow_accel_sequence": false, 00:19:15.511 "rdma_max_cq_size": 0, 00:19:15.511 "rdma_cm_event_timeout_ms": 0, 00:19:15.511 "dhchap_digests": [ 00:19:15.512 "sha256", 00:19:15.512 "sha384", 00:19:15.512 "sha512" 00:19:15.512 ], 00:19:15.512 "dhchap_dhgroups": [ 00:19:15.512 "null", 00:19:15.512 "ffdhe2048", 00:19:15.512 "ffdhe3072", 00:19:15.512 "ffdhe4096", 00:19:15.512 "ffdhe6144", 00:19:15.512 "ffdhe8192" 00:19:15.512 ] 00:19:15.512 } 00:19:15.512 }, 00:19:15.512 { 00:19:15.512 "method": "bdev_nvme_attach_controller", 00:19:15.512 "params": { 00:19:15.512 "name": "TLSTEST", 00:19:15.512 "trtype": "TCP", 00:19:15.512 "adrfam": "IPv4", 00:19:15.512 "traddr": "10.0.0.2", 00:19:15.512 "trsvcid": "4420", 00:19:15.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.512 "prchk_reftag": false, 00:19:15.512 "prchk_guard": false, 00:19:15.512 "ctrlr_loss_timeout_sec": 0, 00:19:15.512 "reconnect_delay_sec": 0, 00:19:15.512 "fast_io_fail_timeout_sec": 0, 00:19:15.512 "psk": "key0", 00:19:15.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.512 "hdgst": false, 00:19:15.512 "ddgst": false, 00:19:15.512 "multipath": "multipath" 00:19:15.512 } 00:19:15.512 }, 00:19:15.512 { 00:19:15.512 "method": 
"bdev_nvme_set_hotplug", 00:19:15.512 "params": { 00:19:15.512 "period_us": 100000, 00:19:15.512 "enable": false 00:19:15.512 } 00:19:15.512 }, 00:19:15.512 { 00:19:15.512 "method": "bdev_wait_for_examine" 00:19:15.512 } 00:19:15.512 ] 00:19:15.512 }, 00:19:15.512 { 00:19:15.512 "subsystem": "nbd", 00:19:15.512 "config": [] 00:19:15.512 } 00:19:15.512 ] 00:19:15.512 }' 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 78556 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 78556 ']' 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 78556 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78556 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78556' 00:19:15.512 killing process with pid 78556 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 78556 00:19:15.512 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.512 00:19:15.512 Latency(us) 00:19:15.512 [2024-12-10T03:06:14.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.512 [2024-12-10T03:06:14.798Z] =================================================================================================================== 00:19:15.512 [2024-12-10T03:06:14.798Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.512 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 78556 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 78305 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 78305 ']' 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 78305 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78305 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78305' 00:19:15.770 killing process with pid 78305 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 78305 00:19:15.770 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 78305 00:19:16.033 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # 
nvmfappstart -m 0x2 -c /dev/fd/62 00:19:16.033 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.033 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.033 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.033 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:16.033 "subsystems": [ 00:19:16.033 { 00:19:16.033 "subsystem": "keyring", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "keyring_file_add_key", 00:19:16.033 "params": { 00:19:16.033 "name": "key0", 00:19:16.033 "path": "/tmp/tmp.29uvJDZwgq" 00:19:16.033 } 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "iobuf", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "iobuf_set_options", 00:19:16.033 "params": { 00:19:16.033 "small_pool_count": 8192, 00:19:16.033 "large_pool_count": 1024, 00:19:16.033 "small_bufsize": 8192, 00:19:16.033 "large_bufsize": 135168, 00:19:16.033 "enable_numa": false 00:19:16.033 } 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "sock", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "sock_set_default_impl", 00:19:16.033 "params": { 00:19:16.033 "impl_name": "posix" 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "sock_impl_set_options", 00:19:16.033 "params": { 00:19:16.033 "impl_name": "ssl", 00:19:16.033 "recv_buf_size": 4096, 00:19:16.033 "send_buf_size": 4096, 00:19:16.033 "enable_recv_pipe": true, 00:19:16.033 "enable_quickack": false, 00:19:16.033 "enable_placement_id": 0, 00:19:16.033 "enable_zerocopy_send_server": true, 00:19:16.033 "enable_zerocopy_send_client": false, 00:19:16.033 "zerocopy_threshold": 0, 00:19:16.033 "tls_version": 0, 00:19:16.033 "enable_ktls": false 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "sock_impl_set_options", 00:19:16.033 "params": { 00:19:16.033 "impl_name": "posix", 00:19:16.033 "recv_buf_size": 2097152, 00:19:16.033 "send_buf_size": 2097152, 00:19:16.033 "enable_recv_pipe": true, 00:19:16.033 "enable_quickack": false, 00:19:16.033 "enable_placement_id": 0, 00:19:16.033 "enable_zerocopy_send_server": true, 00:19:16.033 "enable_zerocopy_send_client": false, 00:19:16.033 "zerocopy_threshold": 0, 00:19:16.033 "tls_version": 0, 00:19:16.033 "enable_ktls": false 00:19:16.033 } 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "vmd", 00:19:16.033 "config": [] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "accel", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "accel_set_options", 00:19:16.033 "params": { 00:19:16.033 "small_cache_size": 128, 00:19:16.033 "large_cache_size": 16, 00:19:16.033 "task_count": 2048, 00:19:16.033 "sequence_count": 2048, 00:19:16.033 "buf_count": 2048 00:19:16.033 } 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "bdev", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "bdev_set_options", 00:19:16.033 "params": { 00:19:16.033 "bdev_io_pool_size": 65535, 00:19:16.033 "bdev_io_cache_size": 256, 00:19:16.033 "bdev_auto_examine": true, 00:19:16.033 "iobuf_small_cache_size": 128, 00:19:16.033 "iobuf_large_cache_size": 16 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_raid_set_options", 00:19:16.033 "params": { 00:19:16.033 "process_window_size_kb": 1024, 
00:19:16.033 "process_max_bandwidth_mb_sec": 0 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_iscsi_set_options", 00:19:16.033 "params": { 00:19:16.033 "timeout_sec": 30 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_nvme_set_options", 00:19:16.033 "params": { 00:19:16.033 "action_on_timeout": "none", 00:19:16.033 "timeout_us": 0, 00:19:16.033 "timeout_admin_us": 0, 00:19:16.033 "keep_alive_timeout_ms": 10000, 00:19:16.033 "arbitration_burst": 0, 00:19:16.033 "low_priority_weight": 0, 00:19:16.033 "medium_priority_weight": 0, 00:19:16.033 "high_priority_weight": 0, 00:19:16.033 "nvme_adminq_poll_period_us": 10000, 00:19:16.033 "nvme_ioq_poll_period_us": 0, 00:19:16.033 "io_queue_requests": 0, 00:19:16.033 "delay_cmd_submit": true, 00:19:16.033 "transport_retry_count": 4, 00:19:16.033 "bdev_retry_count": 3, 00:19:16.033 "transport_ack_timeout": 0, 00:19:16.033 "ctrlr_loss_timeout_sec": 0, 00:19:16.033 "reconnect_delay_sec": 0, 00:19:16.033 "fast_io_fail_timeout_sec": 0, 00:19:16.033 "disable_auto_failback": false, 00:19:16.033 "generate_uuids": false, 00:19:16.033 "transport_tos": 0, 00:19:16.033 "nvme_error_stat": false, 00:19:16.033 "rdma_srq_size": 0, 00:19:16.033 "io_path_stat": false, 00:19:16.033 "allow_accel_sequence": false, 00:19:16.033 "rdma_max_cq_size": 0, 00:19:16.033 "rdma_cm_event_timeout_ms": 0, 00:19:16.033 "dhchap_digests": [ 00:19:16.033 "sha256", 00:19:16.033 "sha384", 00:19:16.033 "sha512" 00:19:16.033 ], 00:19:16.033 "dhchap_dhgroups": [ 00:19:16.033 "null", 00:19:16.033 "ffdhe2048", 00:19:16.033 "ffdhe3072", 00:19:16.033 "ffdhe4096", 00:19:16.033 "ffdhe6144", 00:19:16.033 "ffdhe8192" 00:19:16.033 ] 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_nvme_set_hotplug", 00:19:16.033 "params": { 00:19:16.033 "period_us": 100000, 00:19:16.033 "enable": false 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_malloc_create", 00:19:16.033 "params": { 00:19:16.033 "name": "malloc0", 00:19:16.033 "num_blocks": 8192, 00:19:16.033 "block_size": 4096, 00:19:16.033 "physical_block_size": 4096, 00:19:16.033 "uuid": "617fe4ef-1100-4cab-a95f-9c94f2deac34", 00:19:16.033 "optimal_io_boundary": 0, 00:19:16.033 "md_size": 0, 00:19:16.033 "dif_type": 0, 00:19:16.033 "dif_is_head_of_md": false, 00:19:16.033 "dif_pi_format": 0 00:19:16.033 } 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "method": "bdev_wait_for_examine" 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "nbd", 00:19:16.033 "config": [] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "scheduler", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "framework_set_scheduler", 00:19:16.033 "params": { 00:19:16.033 "name": "static" 00:19:16.033 } 00:19:16.033 } 00:19:16.033 ] 00:19:16.033 }, 00:19:16.033 { 00:19:16.033 "subsystem": "nvmf", 00:19:16.033 "config": [ 00:19:16.033 { 00:19:16.033 "method": "nvmf_set_config", 00:19:16.033 "params": { 00:19:16.033 "discovery_filter": "match_any", 00:19:16.033 "admin_cmd_passthru": { 00:19:16.033 "identify_ctrlr": false 00:19:16.033 }, 00:19:16.034 "dhchap_digests": [ 00:19:16.034 "sha256", 00:19:16.034 "sha384", 00:19:16.034 "sha512" 00:19:16.034 ], 00:19:16.034 "dhchap_dhgroups": [ 00:19:16.034 "null", 00:19:16.034 "ffdhe2048", 00:19:16.034 "ffdhe3072", 00:19:16.034 "ffdhe4096", 00:19:16.034 "ffdhe6144", 00:19:16.034 "ffdhe8192" 00:19:16.034 ] 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": 
"nvmf_set_max_subsystems", 00:19:16.034 "params": { 00:19:16.034 "max_subsystems": 1024 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_set_crdt", 00:19:16.034 "params": { 00:19:16.034 "crdt1": 0, 00:19:16.034 "crdt2": 0, 00:19:16.034 "crdt3": 0 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_create_transport", 00:19:16.034 "params": { 00:19:16.034 "trtype": "TCP", 00:19:16.034 "max_queue_depth": 128, 00:19:16.034 "max_io_qpairs_per_ctrlr": 127, 00:19:16.034 "in_capsule_data_size": 4096, 00:19:16.034 "max_io_size": 131072, 00:19:16.034 "io_unit_size": 131072, 00:19:16.034 "max_aq_depth": 128, 00:19:16.034 "num_shared_buffers": 511, 00:19:16.034 "buf_cache_size": 4294967295, 00:19:16.034 "dif_insert_or_strip": false, 00:19:16.034 "zcopy": false, 00:19:16.034 "c2h_success": false, 00:19:16.034 "sock_priority": 0, 00:19:16.034 "abort_timeout_sec": 1, 00:19:16.034 "ack_timeout": 0, 00:19:16.034 "data_wr_pool_size": 0 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_create_subsystem", 00:19:16.034 "params": { 00:19:16.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.034 "allow_any_host": false, 00:19:16.034 "serial_number": "SPDK00000000000001", 00:19:16.034 "model_number": "SPDK bdev Controller", 00:19:16.034 "max_namespaces": 10, 00:19:16.034 "min_cntlid": 1, 00:19:16.034 "max_cntlid": 65519, 00:19:16.034 "ana_reporting": false 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_subsystem_add_host", 00:19:16.034 "params": { 00:19:16.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.034 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.034 "psk": "key0" 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_subsystem_add_ns", 00:19:16.034 "params": { 00:19:16.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.034 "namespace": { 00:19:16.034 "nsid": 1, 00:19:16.034 "bdev_name": "malloc0", 00:19:16.034 "nguid": "617FE4EF11004CABA95F9C94F2DEAC34", 00:19:16.034 "uuid": "617fe4ef-1100-4cab-a95f-9c94f2deac34", 00:19:16.034 "no_auto_visible": false 00:19:16.034 } 00:19:16.034 } 00:19:16.034 }, 00:19:16.034 { 00:19:16.034 "method": "nvmf_subsystem_add_listener", 00:19:16.034 "params": { 00:19:16.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.034 "listen_address": { 00:19:16.034 "trtype": "TCP", 00:19:16.034 "adrfam": "IPv4", 00:19:16.034 "traddr": "10.0.0.2", 00:19:16.034 "trsvcid": "4420" 00:19:16.034 }, 00:19:16.034 "secure_channel": true 00:19:16.034 } 00:19:16.034 } 00:19:16.034 ] 00:19:16.034 } 00:19:16.034 ] 00:19:16.034 }' 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=78806 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 78806 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 78806 ']' 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:16.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.034 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.034 [2024-12-10 04:06:15.110105] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:16.034 [2024-12-10 04:06:15.110147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.034 [2024-12-10 04:06:15.185009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.034 [2024-12-10 04:06:15.223593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.034 [2024-12-10 04:06:15.223628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.034 [2024-12-10 04:06:15.223635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.034 [2024-12-10 04:06:15.223641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.034 [2024-12-10 04:06:15.223646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.034 [2024-12-10 04:06:15.224169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.293 [2024-12-10 04:06:15.435493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.293 [2024-12-10 04:06:15.467524] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.293 [2024-12-10 04:06:15.467733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=79036 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 79036 /var/tmp/bdevperf.sock 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 79036 ']' 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.860 04:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.860 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:16.860 "subsystems": [ 00:19:16.860 { 00:19:16.860 "subsystem": "keyring", 00:19:16.860 "config": [ 00:19:16.860 { 00:19:16.860 "method": "keyring_file_add_key", 00:19:16.860 "params": { 00:19:16.860 "name": "key0", 00:19:16.860 "path": "/tmp/tmp.29uvJDZwgq" 00:19:16.860 } 00:19:16.860 } 00:19:16.860 ] 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "subsystem": "iobuf", 00:19:16.860 "config": [ 00:19:16.860 { 00:19:16.860 "method": "iobuf_set_options", 00:19:16.860 "params": { 00:19:16.860 "small_pool_count": 8192, 00:19:16.860 "large_pool_count": 1024, 00:19:16.860 "small_bufsize": 8192, 00:19:16.860 "large_bufsize": 135168, 00:19:16.860 "enable_numa": false 00:19:16.860 } 00:19:16.860 } 00:19:16.860 ] 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "subsystem": "sock", 00:19:16.860 "config": [ 00:19:16.860 { 00:19:16.860 "method": "sock_set_default_impl", 00:19:16.860 "params": { 00:19:16.860 "impl_name": "posix" 00:19:16.860 } 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "method": "sock_impl_set_options", 00:19:16.860 "params": { 00:19:16.860 "impl_name": "ssl", 00:19:16.860 "recv_buf_size": 4096, 00:19:16.860 "send_buf_size": 4096, 00:19:16.860 "enable_recv_pipe": true, 00:19:16.860 "enable_quickack": false, 00:19:16.860 "enable_placement_id": 0, 00:19:16.860 "enable_zerocopy_send_server": true, 00:19:16.860 "enable_zerocopy_send_client": false, 00:19:16.860 "zerocopy_threshold": 0, 00:19:16.860 "tls_version": 0, 00:19:16.860 "enable_ktls": false 00:19:16.860 } 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "method": "sock_impl_set_options", 00:19:16.860 "params": { 00:19:16.860 "impl_name": "posix", 00:19:16.860 "recv_buf_size": 2097152, 00:19:16.860 "send_buf_size": 2097152, 00:19:16.860 "enable_recv_pipe": true, 00:19:16.860 "enable_quickack": false, 00:19:16.860 "enable_placement_id": 0, 00:19:16.860 "enable_zerocopy_send_server": true, 00:19:16.860 "enable_zerocopy_send_client": false, 00:19:16.860 "zerocopy_threshold": 0, 00:19:16.860 "tls_version": 0, 00:19:16.860 "enable_ktls": false 00:19:16.860 } 00:19:16.860 } 00:19:16.860 ] 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "subsystem": "vmd", 00:19:16.860 "config": [] 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "subsystem": "accel", 00:19:16.860 "config": [ 00:19:16.860 { 00:19:16.860 "method": "accel_set_options", 00:19:16.860 "params": { 00:19:16.860 "small_cache_size": 128, 00:19:16.860 "large_cache_size": 16, 00:19:16.860 "task_count": 2048, 00:19:16.860 "sequence_count": 2048, 00:19:16.860 "buf_count": 2048 00:19:16.860 } 00:19:16.860 } 00:19:16.860 ] 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "subsystem": "bdev", 00:19:16.860 "config": [ 00:19:16.860 { 00:19:16.860 "method": "bdev_set_options", 00:19:16.860 "params": { 00:19:16.860 "bdev_io_pool_size": 65535, 00:19:16.860 "bdev_io_cache_size": 256, 00:19:16.860 "bdev_auto_examine": true, 00:19:16.860 "iobuf_small_cache_size": 128, 00:19:16.860 "iobuf_large_cache_size": 16 00:19:16.860 } 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "method": "bdev_raid_set_options", 00:19:16.860 "params": { 00:19:16.860 "process_window_size_kb": 1024, 00:19:16.860 "process_max_bandwidth_mb_sec": 0 00:19:16.860 } 00:19:16.860 }, 
00:19:16.860 { 00:19:16.860 "method": "bdev_iscsi_set_options", 00:19:16.860 "params": { 00:19:16.860 "timeout_sec": 30 00:19:16.860 } 00:19:16.860 }, 00:19:16.860 { 00:19:16.860 "method": "bdev_nvme_set_options", 00:19:16.860 "params": { 00:19:16.860 "action_on_timeout": "none", 00:19:16.860 "timeout_us": 0, 00:19:16.860 "timeout_admin_us": 0, 00:19:16.860 "keep_alive_timeout_ms": 10000, 00:19:16.860 "arbitration_burst": 0, 00:19:16.860 "low_priority_weight": 0, 00:19:16.860 "medium_priority_weight": 0, 00:19:16.860 "high_priority_weight": 0, 00:19:16.860 "nvme_adminq_poll_period_us": 10000, 00:19:16.860 "nvme_ioq_poll_period_us": 0, 00:19:16.860 "io_queue_requests": 512, 00:19:16.860 "delay_cmd_submit": true, 00:19:16.860 "transport_retry_count": 4, 00:19:16.860 "bdev_retry_count": 3, 00:19:16.860 "transport_ack_timeout": 0, 00:19:16.860 "ctrlr_loss_timeout_sec": 0, 00:19:16.860 "reconnect_delay_sec": 0, 00:19:16.860 "fast_io_fail_timeout_sec": 0, 00:19:16.860 "disable_auto_failback": false, 00:19:16.860 "generate_uuids": false, 00:19:16.860 "transport_tos": 0, 00:19:16.860 "nvme_error_stat": false, 00:19:16.860 "rdma_srq_size": 0, 00:19:16.860 "io_path_stat": false, 00:19:16.860 "allow_accel_sequence": false, 00:19:16.860 "rdma_max_cq_size": 0, 00:19:16.860 "rdma_cm_event_timeout_ms": 0, 00:19:16.860 "dhchap_digests": [ 00:19:16.860 "sha256", 00:19:16.860 "sha384", 00:19:16.860 "sha512" 00:19:16.860 ], 00:19:16.860 "dhchap_dhgroups": [ 00:19:16.860 "null", 00:19:16.860 "ffdhe2048", 00:19:16.860 "ffdhe3072", 00:19:16.860 "ffdhe4096", 00:19:16.860 "ffdhe6144", 00:19:16.860 "ffdhe8192" 00:19:16.860 ] 00:19:16.860 } 00:19:16.860 }, 00:19:16.861 { 00:19:16.861 "method": "bdev_nvme_attach_controller", 00:19:16.861 "params": { 00:19:16.861 "name": "TLSTEST", 00:19:16.861 "trtype": "TCP", 00:19:16.861 "adrfam": "IPv4", 00:19:16.861 "traddr": "10.0.0.2", 00:19:16.861 "trsvcid": "4420", 00:19:16.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.861 "prchk_reftag": false, 00:19:16.861 "prchk_guard": false, 00:19:16.861 "ctrlr_loss_timeout_sec": 0, 00:19:16.861 "reconnect_delay_sec": 0, 00:19:16.861 "fast_io_fail_timeout_sec": 0, 00:19:16.861 "psk": "key0", 00:19:16.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.861 "hdgst": false, 00:19:16.861 "ddgst": false, 00:19:16.861 "multipath": "multipath" 00:19:16.861 } 00:19:16.861 }, 00:19:16.861 { 00:19:16.861 "method": "bdev_nvme_set_hotplug", 00:19:16.861 "params": { 00:19:16.861 "period_us": 100000, 00:19:16.861 "enable": false 00:19:16.861 } 00:19:16.861 }, 00:19:16.861 { 00:19:16.861 "method": "bdev_wait_for_examine" 00:19:16.861 } 00:19:16.861 ] 00:19:16.861 }, 00:19:16.861 { 00:19:16.861 "subsystem": "nbd", 00:19:16.861 "config": [] 00:19:16.861 } 00:19:16.861 ] 00:19:16.861 }' 00:19:16.861 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.861 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.861 [2024-12-10 04:06:16.021913] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:16.861 [2024-12-10 04:06:16.021960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79036 ]
00:19:16.861 [2024-12-10 04:06:16.094531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:16.861 [2024-12-10 04:06:16.133211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:17.119 [2024-12-10 04:06:16.286683] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:17.685 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:17.685 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:17.685 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:17.685 Running I/O for 10 seconds...
00:19:19.996 5434.00 IOPS, 21.23 MiB/s
[2024-12-10T03:06:20.217Z] 5512.00 IOPS, 21.53 MiB/s
[2024-12-10T03:06:21.152Z] 5436.00 IOPS, 21.23 MiB/s
[2024-12-10T03:06:22.086Z] 5328.50 IOPS, 20.81 MiB/s
[2024-12-10T03:06:23.022Z] 5258.20 IOPS, 20.54 MiB/s
[2024-12-10T03:06:24.396Z] 5213.17 IOPS, 20.36 MiB/s
[2024-12-10T03:06:25.331Z] 5143.43 IOPS, 20.09 MiB/s
[2024-12-10T03:06:26.265Z] 5124.25 IOPS, 20.02 MiB/s
[2024-12-10T03:06:27.200Z] 5115.89 IOPS, 19.98 MiB/s
[2024-12-10T03:06:27.200Z] 5104.90 IOPS, 19.94 MiB/s
00:19:27.914 Latency(us)
00:19:27.914 [2024-12-10T03:06:27.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:27.914 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:27.914 Verification LBA range: start 0x0 length 0x2000
00:19:27.914 TLSTESTn1 : 10.02 5108.99 19.96 0.00 0.00 25017.08 5648.58 30833.13
00:19:27.914 [2024-12-10T03:06:27.200Z] ===================================================================================================================
00:19:27.914 [2024-12-10T03:06:27.200Z] Total : 5108.99 19.96 0.00 0.00 25017.08 5648.58 30833.13
00:19:27.914 {
00:19:27.914 "results": [
00:19:27.914 {
00:19:27.914 "job": "TLSTESTn1",
00:19:27.914 "core_mask": "0x4",
00:19:27.914 "workload": "verify",
00:19:27.914 "status": "finished",
00:19:27.914 "verify_range": {
00:19:27.914 "start": 0,
00:19:27.914 "length": 8192
00:19:27.914 },
00:19:27.914 "queue_depth": 128,
00:19:27.914 "io_size": 4096,
00:19:27.914 "runtime": 10.016856,
00:19:27.914 "iops": 5108.98828933949,
00:19:27.914 "mibps": 19.95698550523238,
00:19:27.914 "io_failed": 0,
00:19:27.914 "io_timeout": 0,
00:19:27.914 "avg_latency_us": 25017.075630392224,
00:19:27.914 "min_latency_us": 5648.579047619048,
00:19:27.914 "max_latency_us": 30833.12761904762
00:19:27.914 }
00:19:27.914 ],
00:19:27.914 "core_count": 1
00:19:27.914 }
00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 79036
00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 79036 ']'
00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 79036
00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@959 -- # uname 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79036 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79036' 00:19:27.914 killing process with pid 79036 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 79036 00:19:27.914 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.914 00:19:27.914 Latency(us) 00:19:27.914 [2024-12-10T03:06:27.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.914 [2024-12-10T03:06:27.200Z] =================================================================================================================== 00:19:27.914 [2024-12-10T03:06:27.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.914 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 79036 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 78806 ']' 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78806' 00:19:28.173 killing process with pid 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 78806 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=80837 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 80837 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:28.173 04:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 80837 ']' 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.173 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.431 [2024-12-10 04:06:27.502593] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:28.431 [2024-12-10 04:06:27.502639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.431 [2024-12-10 04:06:27.578734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.431 [2024-12-10 04:06:27.617133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.431 [2024-12-10 04:06:27.617172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.431 [2024-12-10 04:06:27.617180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.431 [2024-12-10 04:06:27.617186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.431 [2024-12-10 04:06:27.617191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
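A quick cross-check of the 10-second TLSTESTn1 summary above: with 4096-byte I/O at queue depth 128, both reported columns follow from the measured IOPS alone. A back-of-the-envelope sketch (not part of the captured run; numbers copied from the result table):

# Cross-check of the 10-second TLSTESTn1 run above (qd=128, 4 KiB I/O).
awk 'BEGIN {
  iops = 5108.99                                              # Total IOPS from the table
  printf "throughput : %.2f MiB/s\n", iops * 4096 / 1048576   # -> 19.96, matches the MiB/s column
  printf "avg latency: %.0f us\n", 128 / iops * 1000000       # -> ~25054, close to the reported 25017.08 us
}'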
00:19:28.431 [2024-12-10 04:06:27.617694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.431 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.689 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.689 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.29uvJDZwgq 00:19:28.689 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.29uvJDZwgq 00:19:28.689 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.689 [2024-12-10 04:06:27.920407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.689 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.947 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.206 [2024-12-10 04:06:28.281320] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.206 [2024-12-10 04:06:28.281518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.206 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.206 malloc0 00:19:29.206 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.464 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:29.723 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=81085 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 81085 /var/tmp/bdevperf.sock 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 81085 ']' 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.982 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.982 [2024-12-10 04:06:29.069883] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:29.982 [2024-12-10 04:06:29.069930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81085 ] 00:19:29.982 [2024-12-10 04:06:29.143115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.982 [2024-12-10 04:06:29.181743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.240 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.240 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.240 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:30.240 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:30.498 [2024-12-10 04:06:29.641575] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.499 nvme0n1 00:19:30.499 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.757 Running I/O for 1 seconds... 
00:19:31.692 5360.00 IOPS, 20.94 MiB/s 00:19:31.692 Latency(us) 00:19:31.692 [2024-12-10T03:06:30.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.692 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:31.692 Verification LBA range: start 0x0 length 0x2000 00:19:31.692 nvme0n1 : 1.01 5410.79 21.14 0.00 0.00 23503.34 5430.13 20971.52 00:19:31.692 [2024-12-10T03:06:30.978Z] =================================================================================================================== 00:19:31.692 [2024-12-10T03:06:30.978Z] Total : 5410.79 21.14 0.00 0.00 23503.34 5430.13 20971.52 00:19:31.692 { 00:19:31.692 "results": [ 00:19:31.692 { 00:19:31.692 "job": "nvme0n1", 00:19:31.692 "core_mask": "0x2", 00:19:31.692 "workload": "verify", 00:19:31.692 "status": "finished", 00:19:31.692 "verify_range": { 00:19:31.692 "start": 0, 00:19:31.692 "length": 8192 00:19:31.692 }, 00:19:31.692 "queue_depth": 128, 00:19:31.692 "io_size": 4096, 00:19:31.692 "runtime": 1.01427, 00:19:31.692 "iops": 5410.788054462816, 00:19:31.692 "mibps": 21.135890837745375, 00:19:31.692 "io_failed": 0, 00:19:31.692 "io_timeout": 0, 00:19:31.692 "avg_latency_us": 23503.340452589197, 00:19:31.692 "min_latency_us": 5430.125714285714, 00:19:31.692 "max_latency_us": 20971.52 00:19:31.692 } 00:19:31.692 ], 00:19:31.692 "core_count": 1 00:19:31.692 } 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 81085 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81085 ']' 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81085 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81085 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81085' 00:19:31.692 killing process with pid 81085 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81085 00:19:31.692 Received shutdown signal, test time was about 1.000000 seconds 00:19:31.692 00:19:31.692 Latency(us) 00:19:31.692 [2024-12-10T03:06:30.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.692 [2024-12-10T03:06:30.978Z] =================================================================================================================== 00:19:31.692 [2024-12-10T03:06:30.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.692 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81085 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 80837 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 80837 ']' 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 80837 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80837 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80837' 00:19:31.951 killing process with pid 80837 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 80837 00:19:31.951 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 80837 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=81538 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 81538 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 81538 ']' 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.210 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.210 [2024-12-10 04:06:31.343412] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:32.210 [2024-12-10 04:06:31.343458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.210 [2024-12-10 04:06:31.418910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.210 [2024-12-10 04:06:31.457438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.210 [2024-12-10 04:06:31.457475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.210 [2024-12-10 04:06:31.457485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.210 [2024-12-10 04:06:31.457493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.210 [2024-12-10 04:06:31.457498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.210 [2024-12-10 04:06:31.457985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.469 [2024-12-10 04:06:31.592723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.469 malloc0 00:19:32.469 [2024-12-10 04:06:31.620665] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.469 [2024-12-10 04:06:31.620849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=81565 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 81565 /var/tmp/bdevperf.sock 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 81565 ']' 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.469 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.469 [2024-12-10 04:06:31.696657] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:32.469 [2024-12-10 04:06:31.696696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81565 ] 00:19:32.728 [2024-12-10 04:06:31.772503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.728 [2024-12-10 04:06:31.812927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.728 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.728 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.728 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.29uvJDZwgq 00:19:32.986 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:33.244 [2024-12-10 04:06:32.269591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.244 nvme0n1 00:19:33.244 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:33.244 Running I/O for 1 seconds... 00:19:34.179 5383.00 IOPS, 21.03 MiB/s 00:19:34.179 Latency(us) 00:19:34.179 [2024-12-10T03:06:33.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.179 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:34.179 Verification LBA range: start 0x0 length 0x2000 00:19:34.179 nvme0n1 : 1.02 5419.88 21.17 0.00 0.00 23418.38 5867.03 24341.94 00:19:34.179 [2024-12-10T03:06:33.465Z] =================================================================================================================== 00:19:34.179 [2024-12-10T03:06:33.465Z] Total : 5419.88 21.17 0.00 0.00 23418.38 5867.03 24341.94 00:19:34.179 { 00:19:34.179 "results": [ 00:19:34.179 { 00:19:34.179 "job": "nvme0n1", 00:19:34.179 "core_mask": "0x2", 00:19:34.179 "workload": "verify", 00:19:34.179 "status": "finished", 00:19:34.179 "verify_range": { 00:19:34.179 "start": 0, 00:19:34.179 "length": 8192 00:19:34.179 }, 00:19:34.179 "queue_depth": 128, 00:19:34.179 "io_size": 4096, 00:19:34.179 "runtime": 1.016813, 00:19:34.179 "iops": 5419.875631015732, 00:19:34.179 "mibps": 21.171389183655204, 00:19:34.179 "io_failed": 0, 00:19:34.179 "io_timeout": 0, 00:19:34.179 "avg_latency_us": 23418.384771582376, 00:19:34.179 "min_latency_us": 5867.032380952381, 00:19:34.179 "max_latency_us": 24341.942857142858 00:19:34.179 } 00:19:34.179 ], 00:19:34.179 "core_count": 1 00:19:34.179 } 00:19:34.437 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:34.437 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.437 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.437 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.437 04:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:34.437 "subsystems": [ 00:19:34.437 { 00:19:34.437 "subsystem": "keyring", 00:19:34.437 "config": [ 00:19:34.437 { 00:19:34.438 "method": "keyring_file_add_key", 00:19:34.438 "params": { 00:19:34.438 "name": "key0", 00:19:34.438 "path": "/tmp/tmp.29uvJDZwgq" 00:19:34.438 } 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "iobuf", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "iobuf_set_options", 00:19:34.438 "params": { 00:19:34.438 "small_pool_count": 8192, 00:19:34.438 "large_pool_count": 1024, 00:19:34.438 "small_bufsize": 8192, 00:19:34.438 "large_bufsize": 135168, 00:19:34.438 "enable_numa": false 00:19:34.438 } 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "sock", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "sock_set_default_impl", 00:19:34.438 "params": { 00:19:34.438 "impl_name": "posix" 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "sock_impl_set_options", 00:19:34.438 "params": { 00:19:34.438 "impl_name": "ssl", 00:19:34.438 "recv_buf_size": 4096, 00:19:34.438 "send_buf_size": 4096, 00:19:34.438 "enable_recv_pipe": true, 00:19:34.438 "enable_quickack": false, 00:19:34.438 "enable_placement_id": 0, 00:19:34.438 "enable_zerocopy_send_server": true, 00:19:34.438 "enable_zerocopy_send_client": false, 00:19:34.438 "zerocopy_threshold": 0, 00:19:34.438 "tls_version": 0, 00:19:34.438 "enable_ktls": false 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "sock_impl_set_options", 00:19:34.438 "params": { 00:19:34.438 "impl_name": "posix", 00:19:34.438 "recv_buf_size": 2097152, 00:19:34.438 "send_buf_size": 2097152, 00:19:34.438 "enable_recv_pipe": true, 00:19:34.438 "enable_quickack": false, 00:19:34.438 "enable_placement_id": 0, 00:19:34.438 "enable_zerocopy_send_server": true, 00:19:34.438 "enable_zerocopy_send_client": false, 00:19:34.438 "zerocopy_threshold": 0, 00:19:34.438 "tls_version": 0, 00:19:34.438 "enable_ktls": false 00:19:34.438 } 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "vmd", 00:19:34.438 "config": [] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "accel", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "accel_set_options", 00:19:34.438 "params": { 00:19:34.438 "small_cache_size": 128, 00:19:34.438 "large_cache_size": 16, 00:19:34.438 "task_count": 2048, 00:19:34.438 "sequence_count": 2048, 00:19:34.438 "buf_count": 2048 00:19:34.438 } 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "bdev", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "bdev_set_options", 00:19:34.438 "params": { 00:19:34.438 "bdev_io_pool_size": 65535, 00:19:34.438 "bdev_io_cache_size": 256, 00:19:34.438 "bdev_auto_examine": true, 00:19:34.438 "iobuf_small_cache_size": 128, 00:19:34.438 "iobuf_large_cache_size": 16 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_raid_set_options", 00:19:34.438 "params": { 00:19:34.438 "process_window_size_kb": 1024, 00:19:34.438 "process_max_bandwidth_mb_sec": 0 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_iscsi_set_options", 00:19:34.438 "params": { 00:19:34.438 "timeout_sec": 30 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_nvme_set_options", 00:19:34.438 "params": { 00:19:34.438 "action_on_timeout": "none", 00:19:34.438 
"timeout_us": 0, 00:19:34.438 "timeout_admin_us": 0, 00:19:34.438 "keep_alive_timeout_ms": 10000, 00:19:34.438 "arbitration_burst": 0, 00:19:34.438 "low_priority_weight": 0, 00:19:34.438 "medium_priority_weight": 0, 00:19:34.438 "high_priority_weight": 0, 00:19:34.438 "nvme_adminq_poll_period_us": 10000, 00:19:34.438 "nvme_ioq_poll_period_us": 0, 00:19:34.438 "io_queue_requests": 0, 00:19:34.438 "delay_cmd_submit": true, 00:19:34.438 "transport_retry_count": 4, 00:19:34.438 "bdev_retry_count": 3, 00:19:34.438 "transport_ack_timeout": 0, 00:19:34.438 "ctrlr_loss_timeout_sec": 0, 00:19:34.438 "reconnect_delay_sec": 0, 00:19:34.438 "fast_io_fail_timeout_sec": 0, 00:19:34.438 "disable_auto_failback": false, 00:19:34.438 "generate_uuids": false, 00:19:34.438 "transport_tos": 0, 00:19:34.438 "nvme_error_stat": false, 00:19:34.438 "rdma_srq_size": 0, 00:19:34.438 "io_path_stat": false, 00:19:34.438 "allow_accel_sequence": false, 00:19:34.438 "rdma_max_cq_size": 0, 00:19:34.438 "rdma_cm_event_timeout_ms": 0, 00:19:34.438 "dhchap_digests": [ 00:19:34.438 "sha256", 00:19:34.438 "sha384", 00:19:34.438 "sha512" 00:19:34.438 ], 00:19:34.438 "dhchap_dhgroups": [ 00:19:34.438 "null", 00:19:34.438 "ffdhe2048", 00:19:34.438 "ffdhe3072", 00:19:34.438 "ffdhe4096", 00:19:34.438 "ffdhe6144", 00:19:34.438 "ffdhe8192" 00:19:34.438 ] 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_nvme_set_hotplug", 00:19:34.438 "params": { 00:19:34.438 "period_us": 100000, 00:19:34.438 "enable": false 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_malloc_create", 00:19:34.438 "params": { 00:19:34.438 "name": "malloc0", 00:19:34.438 "num_blocks": 8192, 00:19:34.438 "block_size": 4096, 00:19:34.438 "physical_block_size": 4096, 00:19:34.438 "uuid": "3bffeaf3-c17e-4343-a47c-1b274211f8b0", 00:19:34.438 "optimal_io_boundary": 0, 00:19:34.438 "md_size": 0, 00:19:34.438 "dif_type": 0, 00:19:34.438 "dif_is_head_of_md": false, 00:19:34.438 "dif_pi_format": 0 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "bdev_wait_for_examine" 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "nbd", 00:19:34.438 "config": [] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "scheduler", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "framework_set_scheduler", 00:19:34.438 "params": { 00:19:34.438 "name": "static" 00:19:34.438 } 00:19:34.438 } 00:19:34.438 ] 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "subsystem": "nvmf", 00:19:34.438 "config": [ 00:19:34.438 { 00:19:34.438 "method": "nvmf_set_config", 00:19:34.438 "params": { 00:19:34.438 "discovery_filter": "match_any", 00:19:34.438 "admin_cmd_passthru": { 00:19:34.438 "identify_ctrlr": false 00:19:34.438 }, 00:19:34.438 "dhchap_digests": [ 00:19:34.438 "sha256", 00:19:34.438 "sha384", 00:19:34.438 "sha512" 00:19:34.438 ], 00:19:34.438 "dhchap_dhgroups": [ 00:19:34.438 "null", 00:19:34.438 "ffdhe2048", 00:19:34.438 "ffdhe3072", 00:19:34.438 "ffdhe4096", 00:19:34.438 "ffdhe6144", 00:19:34.438 "ffdhe8192" 00:19:34.438 ] 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "nvmf_set_max_subsystems", 00:19:34.438 "params": { 00:19:34.438 "max_subsystems": 1024 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "nvmf_set_crdt", 00:19:34.438 "params": { 00:19:34.438 "crdt1": 0, 00:19:34.438 "crdt2": 0, 00:19:34.438 "crdt3": 0 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "nvmf_create_transport", 00:19:34.438 "params": 
{ 00:19:34.438 "trtype": "TCP", 00:19:34.438 "max_queue_depth": 128, 00:19:34.438 "max_io_qpairs_per_ctrlr": 127, 00:19:34.438 "in_capsule_data_size": 4096, 00:19:34.438 "max_io_size": 131072, 00:19:34.438 "io_unit_size": 131072, 00:19:34.438 "max_aq_depth": 128, 00:19:34.438 "num_shared_buffers": 511, 00:19:34.438 "buf_cache_size": 4294967295, 00:19:34.438 "dif_insert_or_strip": false, 00:19:34.438 "zcopy": false, 00:19:34.438 "c2h_success": false, 00:19:34.438 "sock_priority": 0, 00:19:34.438 "abort_timeout_sec": 1, 00:19:34.438 "ack_timeout": 0, 00:19:34.438 "data_wr_pool_size": 0 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.438 "method": "nvmf_create_subsystem", 00:19:34.438 "params": { 00:19:34.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.438 "allow_any_host": false, 00:19:34.438 "serial_number": "00000000000000000000", 00:19:34.438 "model_number": "SPDK bdev Controller", 00:19:34.438 "max_namespaces": 32, 00:19:34.438 "min_cntlid": 1, 00:19:34.438 "max_cntlid": 65519, 00:19:34.438 "ana_reporting": false 00:19:34.438 } 00:19:34.438 }, 00:19:34.438 { 00:19:34.439 "method": "nvmf_subsystem_add_host", 00:19:34.439 "params": { 00:19:34.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.439 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.439 "psk": "key0" 00:19:34.439 } 00:19:34.439 }, 00:19:34.439 { 00:19:34.439 "method": "nvmf_subsystem_add_ns", 00:19:34.439 "params": { 00:19:34.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.439 "namespace": { 00:19:34.439 "nsid": 1, 00:19:34.439 "bdev_name": "malloc0", 00:19:34.439 "nguid": "3BFFEAF3C17E4343A47C1B274211F8B0", 00:19:34.439 "uuid": "3bffeaf3-c17e-4343-a47c-1b274211f8b0", 00:19:34.439 "no_auto_visible": false 00:19:34.439 } 00:19:34.439 } 00:19:34.439 }, 00:19:34.439 { 00:19:34.439 "method": "nvmf_subsystem_add_listener", 00:19:34.439 "params": { 00:19:34.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.439 "listen_address": { 00:19:34.439 "trtype": "TCP", 00:19:34.439 "adrfam": "IPv4", 00:19:34.439 "traddr": "10.0.0.2", 00:19:34.439 "trsvcid": "4420" 00:19:34.439 }, 00:19:34.439 "secure_channel": false, 00:19:34.439 "sock_impl": "ssl" 00:19:34.439 } 00:19:34.439 } 00:19:34.439 ] 00:19:34.439 } 00:19:34.439 ] 00:19:34.439 }' 00:19:34.439 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:34.698 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:34.698 "subsystems": [ 00:19:34.698 { 00:19:34.698 "subsystem": "keyring", 00:19:34.698 "config": [ 00:19:34.698 { 00:19:34.698 "method": "keyring_file_add_key", 00:19:34.698 "params": { 00:19:34.698 "name": "key0", 00:19:34.698 "path": "/tmp/tmp.29uvJDZwgq" 00:19:34.698 } 00:19:34.698 } 00:19:34.698 ] 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "subsystem": "iobuf", 00:19:34.698 "config": [ 00:19:34.698 { 00:19:34.698 "method": "iobuf_set_options", 00:19:34.698 "params": { 00:19:34.698 "small_pool_count": 8192, 00:19:34.698 "large_pool_count": 1024, 00:19:34.698 "small_bufsize": 8192, 00:19:34.698 "large_bufsize": 135168, 00:19:34.698 "enable_numa": false 00:19:34.698 } 00:19:34.698 } 00:19:34.698 ] 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "subsystem": "sock", 00:19:34.698 "config": [ 00:19:34.698 { 00:19:34.698 "method": "sock_set_default_impl", 00:19:34.698 "params": { 00:19:34.698 "impl_name": "posix" 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "sock_impl_set_options", 00:19:34.698 
"params": { 00:19:34.698 "impl_name": "ssl", 00:19:34.698 "recv_buf_size": 4096, 00:19:34.698 "send_buf_size": 4096, 00:19:34.698 "enable_recv_pipe": true, 00:19:34.698 "enable_quickack": false, 00:19:34.698 "enable_placement_id": 0, 00:19:34.698 "enable_zerocopy_send_server": true, 00:19:34.698 "enable_zerocopy_send_client": false, 00:19:34.698 "zerocopy_threshold": 0, 00:19:34.698 "tls_version": 0, 00:19:34.698 "enable_ktls": false 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "sock_impl_set_options", 00:19:34.698 "params": { 00:19:34.698 "impl_name": "posix", 00:19:34.698 "recv_buf_size": 2097152, 00:19:34.698 "send_buf_size": 2097152, 00:19:34.698 "enable_recv_pipe": true, 00:19:34.698 "enable_quickack": false, 00:19:34.698 "enable_placement_id": 0, 00:19:34.698 "enable_zerocopy_send_server": true, 00:19:34.698 "enable_zerocopy_send_client": false, 00:19:34.698 "zerocopy_threshold": 0, 00:19:34.698 "tls_version": 0, 00:19:34.698 "enable_ktls": false 00:19:34.698 } 00:19:34.698 } 00:19:34.698 ] 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "subsystem": "vmd", 00:19:34.698 "config": [] 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "subsystem": "accel", 00:19:34.698 "config": [ 00:19:34.698 { 00:19:34.698 "method": "accel_set_options", 00:19:34.698 "params": { 00:19:34.698 "small_cache_size": 128, 00:19:34.698 "large_cache_size": 16, 00:19:34.698 "task_count": 2048, 00:19:34.698 "sequence_count": 2048, 00:19:34.698 "buf_count": 2048 00:19:34.698 } 00:19:34.698 } 00:19:34.698 ] 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "subsystem": "bdev", 00:19:34.698 "config": [ 00:19:34.698 { 00:19:34.698 "method": "bdev_set_options", 00:19:34.698 "params": { 00:19:34.698 "bdev_io_pool_size": 65535, 00:19:34.698 "bdev_io_cache_size": 256, 00:19:34.698 "bdev_auto_examine": true, 00:19:34.698 "iobuf_small_cache_size": 128, 00:19:34.698 "iobuf_large_cache_size": 16 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "bdev_raid_set_options", 00:19:34.698 "params": { 00:19:34.698 "process_window_size_kb": 1024, 00:19:34.698 "process_max_bandwidth_mb_sec": 0 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "bdev_iscsi_set_options", 00:19:34.698 "params": { 00:19:34.698 "timeout_sec": 30 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "bdev_nvme_set_options", 00:19:34.698 "params": { 00:19:34.698 "action_on_timeout": "none", 00:19:34.698 "timeout_us": 0, 00:19:34.698 "timeout_admin_us": 0, 00:19:34.698 "keep_alive_timeout_ms": 10000, 00:19:34.698 "arbitration_burst": 0, 00:19:34.698 "low_priority_weight": 0, 00:19:34.698 "medium_priority_weight": 0, 00:19:34.698 "high_priority_weight": 0, 00:19:34.698 "nvme_adminq_poll_period_us": 10000, 00:19:34.698 "nvme_ioq_poll_period_us": 0, 00:19:34.698 "io_queue_requests": 512, 00:19:34.698 "delay_cmd_submit": true, 00:19:34.698 "transport_retry_count": 4, 00:19:34.698 "bdev_retry_count": 3, 00:19:34.698 "transport_ack_timeout": 0, 00:19:34.698 "ctrlr_loss_timeout_sec": 0, 00:19:34.698 "reconnect_delay_sec": 0, 00:19:34.698 "fast_io_fail_timeout_sec": 0, 00:19:34.698 "disable_auto_failback": false, 00:19:34.698 "generate_uuids": false, 00:19:34.698 "transport_tos": 0, 00:19:34.698 "nvme_error_stat": false, 00:19:34.698 "rdma_srq_size": 0, 00:19:34.698 "io_path_stat": false, 00:19:34.698 "allow_accel_sequence": false, 00:19:34.698 "rdma_max_cq_size": 0, 00:19:34.698 "rdma_cm_event_timeout_ms": 0, 00:19:34.698 "dhchap_digests": [ 00:19:34.698 "sha256", 00:19:34.698 "sha384", 00:19:34.698 
"sha512" 00:19:34.698 ], 00:19:34.698 "dhchap_dhgroups": [ 00:19:34.698 "null", 00:19:34.698 "ffdhe2048", 00:19:34.698 "ffdhe3072", 00:19:34.698 "ffdhe4096", 00:19:34.698 "ffdhe6144", 00:19:34.698 "ffdhe8192" 00:19:34.698 ] 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "bdev_nvme_attach_controller", 00:19:34.698 "params": { 00:19:34.698 "name": "nvme0", 00:19:34.698 "trtype": "TCP", 00:19:34.698 "adrfam": "IPv4", 00:19:34.698 "traddr": "10.0.0.2", 00:19:34.698 "trsvcid": "4420", 00:19:34.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.698 "prchk_reftag": false, 00:19:34.698 "prchk_guard": false, 00:19:34.698 "ctrlr_loss_timeout_sec": 0, 00:19:34.698 "reconnect_delay_sec": 0, 00:19:34.698 "fast_io_fail_timeout_sec": 0, 00:19:34.698 "psk": "key0", 00:19:34.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.698 "hdgst": false, 00:19:34.698 "ddgst": false, 00:19:34.698 "multipath": "multipath" 00:19:34.698 } 00:19:34.698 }, 00:19:34.698 { 00:19:34.698 "method": "bdev_nvme_set_hotplug", 00:19:34.698 "params": { 00:19:34.698 "period_us": 100000, 00:19:34.698 "enable": false 00:19:34.699 } 00:19:34.699 }, 00:19:34.699 { 00:19:34.699 "method": "bdev_enable_histogram", 00:19:34.699 "params": { 00:19:34.699 "name": "nvme0n1", 00:19:34.699 "enable": true 00:19:34.699 } 00:19:34.699 }, 00:19:34.699 { 00:19:34.699 "method": "bdev_wait_for_examine" 00:19:34.699 } 00:19:34.699 ] 00:19:34.699 }, 00:19:34.699 { 00:19:34.699 "subsystem": "nbd", 00:19:34.699 "config": [] 00:19:34.699 } 00:19:34.699 ] 00:19:34.699 }' 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 81565 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81565 ']' 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81565 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81565 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81565' 00:19:34.699 killing process with pid 81565 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81565 00:19:34.699 Received shutdown signal, test time was about 1.000000 seconds 00:19:34.699 00:19:34.699 Latency(us) 00:19:34.699 [2024-12-10T03:06:33.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.699 [2024-12-10T03:06:33.985Z] =================================================================================================================== 00:19:34.699 [2024-12-10T03:06:33.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.699 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81565 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 81538 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81538 ']' 00:19:34.957 
04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81538 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81538 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81538' 00:19:34.957 killing process with pid 81538 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81538 00:19:34.957 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81538 00:19:35.216 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:35.216 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.216 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.216 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:35.216 "subsystems": [ 00:19:35.216 { 00:19:35.216 "subsystem": "keyring", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "keyring_file_add_key", 00:19:35.216 "params": { 00:19:35.216 "name": "key0", 00:19:35.216 "path": "/tmp/tmp.29uvJDZwgq" 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "iobuf", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "iobuf_set_options", 00:19:35.216 "params": { 00:19:35.216 "small_pool_count": 8192, 00:19:35.216 "large_pool_count": 1024, 00:19:35.216 "small_bufsize": 8192, 00:19:35.216 "large_bufsize": 135168, 00:19:35.216 "enable_numa": false 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "sock", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "sock_set_default_impl", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "posix" 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "sock_impl_set_options", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "ssl", 00:19:35.216 "recv_buf_size": 4096, 00:19:35.216 "send_buf_size": 4096, 00:19:35.216 "enable_recv_pipe": true, 00:19:35.216 "enable_quickack": false, 00:19:35.216 "enable_placement_id": 0, 00:19:35.216 "enable_zerocopy_send_server": true, 00:19:35.216 "enable_zerocopy_send_client": false, 00:19:35.216 "zerocopy_threshold": 0, 00:19:35.216 "tls_version": 0, 00:19:35.216 "enable_ktls": false 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "sock_impl_set_options", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "posix", 00:19:35.216 "recv_buf_size": 2097152, 00:19:35.216 "send_buf_size": 2097152, 00:19:35.216 "enable_recv_pipe": true, 00:19:35.216 "enable_quickack": false, 00:19:35.216 "enable_placement_id": 0, 00:19:35.216 "enable_zerocopy_send_server": true, 00:19:35.216 "enable_zerocopy_send_client": false, 00:19:35.216 "zerocopy_threshold": 0, 00:19:35.216 "tls_version": 0, 00:19:35.216 "enable_ktls": false 00:19:35.216 } 
00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "vmd", 00:19:35.216 "config": [] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "accel", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "accel_set_options", 00:19:35.216 "params": { 00:19:35.216 "small_cache_size": 128, 00:19:35.216 "large_cache_size": 16, 00:19:35.216 "task_count": 2048, 00:19:35.216 "sequence_count": 2048, 00:19:35.216 "buf_count": 2048 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "bdev", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "bdev_set_options", 00:19:35.216 "params": { 00:19:35.216 "bdev_io_pool_size": 65535, 00:19:35.216 "bdev_io_cache_size": 256, 00:19:35.216 "bdev_auto_examine": true, 00:19:35.216 "iobuf_small_cache_size": 128, 00:19:35.216 "iobuf_large_cache_size": 16 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "bdev_raid_set_options", 00:19:35.216 "params": { 00:19:35.216 "process_window_size_kb": 1024, 00:19:35.216 "process_max_bandwidth_mb_sec": 0 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "bdev_iscsi_set_options", 00:19:35.216 "params": { 00:19:35.216 "timeout_sec": 30 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "bdev_nvme_set_options", 00:19:35.216 "params": { 00:19:35.216 "action_on_timeout": "none", 00:19:35.216 "timeout_us": 0, 00:19:35.216 "timeout_admin_us": 0, 00:19:35.216 "keep_alive_timeout_ms": 10000, 00:19:35.216 "arbitration_burst": 0, 00:19:35.216 "low_priority_weight": 0, 00:19:35.216 "medium_priority_weight": 0, 00:19:35.217 "high_priority_weight": 0, 00:19:35.217 "nvme_adminq_poll_period_us": 10000, 00:19:35.217 "nvme_ioq_poll_period_us": 0, 00:19:35.217 "io_queue_requests": 0, 00:19:35.217 "delay_cmd_submit": true, 00:19:35.217 "transport_retry_count": 4, 00:19:35.217 "bdev_retry_count": 3, 00:19:35.217 "transport_ack_timeout": 0, 00:19:35.217 "ctrlr_loss_timeout_sec": 0, 00:19:35.217 "reconnect_delay_sec": 0, 00:19:35.217 "fast_io_fail_timeout_sec": 0, 00:19:35.217 "disable_auto_failback": false, 00:19:35.217 "generate_uuids": false, 00:19:35.217 "transport_tos": 0, 00:19:35.217 "nvme_error_stat": false, 00:19:35.217 "rdma_srq_size": 0, 00:19:35.217 "io_path_stat": false, 00:19:35.217 "allow_accel_sequence": false, 00:19:35.217 "rdma_max_cq_size": 0, 00:19:35.217 "rdma_cm_event_timeout_ms": 0, 00:19:35.217 "dhchap_digests": [ 00:19:35.217 "sha256", 00:19:35.217 "sha384", 00:19:35.217 "sha512" 00:19:35.217 ], 00:19:35.217 "dhchap_dhgroups": [ 00:19:35.217 "null", 00:19:35.217 "ffdhe2048", 00:19:35.217 "ffdhe3072", 00:19:35.217 "ffdhe4096", 00:19:35.217 "ffdhe6144", 00:19:35.217 "ffdhe8192" 00:19:35.217 ] 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_nvme_set_hotplug", 00:19:35.217 "params": { 00:19:35.217 "period_us": 100000, 00:19:35.217 "enable": false 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_malloc_create", 00:19:35.217 "params": { 00:19:35.217 "name": "malloc0", 00:19:35.217 "num_blocks": 8192, 00:19:35.217 "block_size": 4096, 00:19:35.217 "physical_block_size": 4096, 00:19:35.217 "uuid": "3bffeaf3-c17e-4343-a47c-1b274211f8b0", 00:19:35.217 "optimal_io_boundary": 0, 00:19:35.217 "md_size": 0, 00:19:35.217 "dif_type": 0, 00:19:35.217 "dif_is_head_of_md": false, 00:19:35.217 "dif_pi_format": 0 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_wait_for_examine" 00:19:35.217 } 00:19:35.217 ] 
00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "subsystem": "nbd", 00:19:35.217 "config": [] 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "subsystem": "scheduler", 00:19:35.217 "config": [ 00:19:35.217 { 00:19:35.217 "method": "framework_set_scheduler", 00:19:35.217 "params": { 00:19:35.217 "name": "static" 00:19:35.217 } 00:19:35.217 } 00:19:35.217 ] 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "subsystem": "nvmf", 00:19:35.217 "config": [ 00:19:35.217 { 00:19:35.217 "method": "nvmf_set_config", 00:19:35.217 "params": { 00:19:35.217 "discovery_filter": "match_any", 00:19:35.217 "admin_cmd_passthru": { 00:19:35.217 "identify_ctrlr": false 00:19:35.217 }, 00:19:35.217 "dhchap_digests": [ 00:19:35.217 "sha256", 00:19:35.217 "sha384", 00:19:35.217 "sha512" 00:19:35.217 ], 00:19:35.217 "dhchap_dhgroups": [ 00:19:35.217 "null", 00:19:35.217 "ffdhe2048", 00:19:35.217 "ffdhe3072", 00:19:35.217 "ffdhe4096", 00:19:35.217 "ffdhe6144", 00:19:35.217 "ffdhe8192" 00:19:35.217 ] 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_set_max_subsystems", 00:19:35.217 "params": { 00:19:35.217 "max_subsystems": 1024 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_set_crdt", 00:19:35.217 "params": { 00:19:35.217 "crdt1": 0, 00:19:35.217 "crdt2": 0, 00:19:35.217 "crdt3": 0 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_create_transport", 00:19:35.217 "params": { 00:19:35.217 "trtype": "TCP", 00:19:35.217 "max_queue_depth": 128, 00:19:35.217 "max_io_qpairs_per_ctrlr": 127, 00:19:35.217 "in_capsule_data_size": 4096, 00:19:35.217 "max_io_size": 131072, 00:19:35.217 "io_unit_size": 131072, 00:19:35.217 "max_aq_depth": 128, 00:19:35.217 "num_shared_buffers": 511, 00:19:35.217 "buf_cache_size": 4294967295, 00:19:35.217 "dif_insert_or_strip": false, 00:19:35.217 "zcopy": false, 00:19:35.217 "c2h_success": false, 00:19:35.217 "sock_priority": 0, 00:19:35.217 "abort_timeout_sec": 1, 00:19:35.217 "ack_timeout": 0, 00:19:35.217 "data_wr_pool_size": 0 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_create_subsystem", 00:19:35.217 "params": { 00:19:35.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.217 "allow_any_host": false, 00:19:35.217 "serial_number": "00000000000000000000", 00:19:35.217 "model_number": "SPDK bdev Controller", 00:19:35.217 "max_namespaces": 32, 00:19:35.217 "min_cntlid": 1, 00:19:35.217 "max_cntlid": 65519, 00:19:35.217 "ana_reporting": false 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_subsystem_add_host", 00:19:35.217 "params": { 00:19:35.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.217 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.217 "psk": "key0" 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_subsystem_add_ns", 00:19:35.217 "params": { 00:19:35.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.217 "namespace": { 00:19:35.217 "nsid": 1, 00:19:35.217 "bdev_name": "malloc0", 00:19:35.217 "nguid": "3BFFEAF3C17E4343A47C1B274211F8B0", 00:19:35.217 "uuid": "3bffeaf3-c17e-4343-a47c-1b274211f8b0", 00:19:35.217 "no_auto_visible": false 00:19:35.217 } 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "nvmf_subsystem_add_listener", 00:19:35.217 "params": { 00:19:35.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.217 "listen_address": { 00:19:35.217 "trtype": "TCP", 00:19:35.217 "adrfam": "IPv4", 00:19:35.217 "traddr": "10.0.0.2", 00:19:35.217 "trsvcid": "4420" 00:19:35.217 }, 00:19:35.217 "secure_channel": false, 00:19:35.217 
"sock_impl": "ssl" 00:19:35.217 } 00:19:35.217 } 00:19:35.217 ] 00:19:35.217 } 00:19:35.217 ] 00:19:35.217 }' 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82022 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82022 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82022 ']' 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.217 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.217 [2024-12-10 04:06:34.344898] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:35.217 [2024-12-10 04:06:34.344944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.217 [2024-12-10 04:06:34.421695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.217 [2024-12-10 04:06:34.460630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.217 [2024-12-10 04:06:34.460663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.217 [2024-12-10 04:06:34.460669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.217 [2024-12-10 04:06:34.460675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.217 [2024-12-10 04:06:34.460680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.217 [2024-12-10 04:06:34.461210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.476 [2024-12-10 04:06:34.673981] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.476 [2024-12-10 04:06:34.706019] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.476 [2024-12-10 04:06:34.706238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.042 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.042 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.042 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=82146 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 82146 /var/tmp/bdevperf.sock 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82146 ']' 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
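The initiator side is restarted the same way: the bperfcfg JSON echoed next was saved from the previous bdevperf instance and already contains the keyring_file_add_key entry, the TLS bdev_nvme_attach_controller call with --psk key0, and bdev_enable_histogram, so the new bdevperf (pid 82146) builds nvme0n1 during startup from -c /dev/fd/63 and the script only has to confirm the controller exists before launching I/O. A minimal sketch of that driving sequence, with paths shortened as above:

    # Start bdevperf idle (-z) on its own RPC socket and hand it the saved
    # JSON via process substitution (this becomes -c /dev/fd/63 in the trace).
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # Verify the TLS-attached controller came up, then run the workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests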
00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:36.043 "subsystems": [ 00:19:36.043 { 00:19:36.043 "subsystem": "keyring", 00:19:36.043 "config": [ 00:19:36.043 { 00:19:36.043 "method": "keyring_file_add_key", 00:19:36.043 "params": { 00:19:36.043 "name": "key0", 00:19:36.043 "path": "/tmp/tmp.29uvJDZwgq" 00:19:36.043 } 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "iobuf", 00:19:36.043 "config": [ 00:19:36.043 { 00:19:36.043 "method": "iobuf_set_options", 00:19:36.043 "params": { 00:19:36.043 "small_pool_count": 8192, 00:19:36.043 "large_pool_count": 1024, 00:19:36.043 "small_bufsize": 8192, 00:19:36.043 "large_bufsize": 135168, 00:19:36.043 "enable_numa": false 00:19:36.043 } 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "sock", 00:19:36.043 "config": [ 00:19:36.043 { 00:19:36.043 "method": "sock_set_default_impl", 00:19:36.043 "params": { 00:19:36.043 "impl_name": "posix" 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "sock_impl_set_options", 00:19:36.043 "params": { 00:19:36.043 "impl_name": "ssl", 00:19:36.043 "recv_buf_size": 4096, 00:19:36.043 "send_buf_size": 4096, 00:19:36.043 "enable_recv_pipe": true, 00:19:36.043 "enable_quickack": false, 00:19:36.043 "enable_placement_id": 0, 00:19:36.043 "enable_zerocopy_send_server": true, 00:19:36.043 "enable_zerocopy_send_client": false, 00:19:36.043 "zerocopy_threshold": 0, 00:19:36.043 "tls_version": 0, 00:19:36.043 "enable_ktls": false 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "sock_impl_set_options", 00:19:36.043 "params": { 00:19:36.043 "impl_name": "posix", 00:19:36.043 "recv_buf_size": 2097152, 00:19:36.043 "send_buf_size": 2097152, 00:19:36.043 "enable_recv_pipe": true, 00:19:36.043 "enable_quickack": false, 00:19:36.043 "enable_placement_id": 0, 00:19:36.043 "enable_zerocopy_send_server": true, 00:19:36.043 "enable_zerocopy_send_client": false, 00:19:36.043 "zerocopy_threshold": 0, 00:19:36.043 "tls_version": 0, 00:19:36.043 "enable_ktls": false 00:19:36.043 } 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "vmd", 00:19:36.043 "config": [] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "accel", 00:19:36.043 "config": [ 00:19:36.043 { 00:19:36.043 "method": "accel_set_options", 00:19:36.043 "params": { 00:19:36.043 "small_cache_size": 128, 00:19:36.043 "large_cache_size": 16, 00:19:36.043 "task_count": 2048, 00:19:36.043 "sequence_count": 2048, 00:19:36.043 "buf_count": 2048 00:19:36.043 } 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "bdev", 00:19:36.043 "config": [ 00:19:36.043 { 00:19:36.043 "method": "bdev_set_options", 00:19:36.043 "params": { 00:19:36.043 "bdev_io_pool_size": 65535, 00:19:36.043 "bdev_io_cache_size": 256, 00:19:36.043 "bdev_auto_examine": true, 00:19:36.043 "iobuf_small_cache_size": 128, 00:19:36.043 "iobuf_large_cache_size": 16 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_raid_set_options", 00:19:36.043 "params": { 00:19:36.043 "process_window_size_kb": 1024, 00:19:36.043 "process_max_bandwidth_mb_sec": 0 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_iscsi_set_options", 00:19:36.043 "params": { 00:19:36.043 "timeout_sec": 30 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_nvme_set_options", 00:19:36.043 "params": { 00:19:36.043 "action_on_timeout": "none", 
00:19:36.043 "timeout_us": 0, 00:19:36.043 "timeout_admin_us": 0, 00:19:36.043 "keep_alive_timeout_ms": 10000, 00:19:36.043 "arbitration_burst": 0, 00:19:36.043 "low_priority_weight": 0, 00:19:36.043 "medium_priority_weight": 0, 00:19:36.043 "high_priority_weight": 0, 00:19:36.043 "nvme_adminq_poll_period_us": 10000, 00:19:36.043 "nvme_ioq_poll_period_us": 0, 00:19:36.043 "io_queue_requests": 512, 00:19:36.043 "delay_cmd_submit": true, 00:19:36.043 "transport_retry_count": 4, 00:19:36.043 "bdev_retry_count": 3, 00:19:36.043 "transport_ack_timeout": 0, 00:19:36.043 "ctrlr_loss_timeout_sec": 0, 00:19:36.043 "reconnect_delay_sec": 0, 00:19:36.043 "fast_io_fail_timeout_sec": 0, 00:19:36.043 "disable_auto_failback": false, 00:19:36.043 "generate_uuids": false, 00:19:36.043 "transport_tos": 0, 00:19:36.043 "nvme_error_stat": false, 00:19:36.043 "rdma_srq_size": 0, 00:19:36.043 "io_path_stat": false, 00:19:36.043 "allow_accel_sequence": false, 00:19:36.043 "rdma_max_cq_size": 0, 00:19:36.043 "rdma_cm_event_timeout_ms": 0, 00:19:36.043 "dhchap_digests": [ 00:19:36.043 "sha256", 00:19:36.043 "sha384", 00:19:36.043 "sha512" 00:19:36.043 ], 00:19:36.043 "dhchap_dhgroups": [ 00:19:36.043 "null", 00:19:36.043 "ffdhe2048", 00:19:36.043 "ffdhe3072", 00:19:36.043 "ffdhe4096", 00:19:36.043 "ffdhe6144", 00:19:36.043 "ffdhe8192" 00:19:36.043 ] 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_nvme_attach_controller", 00:19:36.043 "params": { 00:19:36.043 "name": "nvme0", 00:19:36.043 "trtype": "TCP", 00:19:36.043 "adrfam": "IPv4", 00:19:36.043 "traddr": "10.0.0.2", 00:19:36.043 "trsvcid": "4420", 00:19:36.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.043 "prchk_reftag": false, 00:19:36.043 "prchk_guard": false, 00:19:36.043 "ctrlr_loss_timeout_sec": 0, 00:19:36.043 "reconnect_delay_sec": 0, 00:19:36.043 "fast_io_fail_timeout_sec": 0, 00:19:36.043 "psk": "key0", 00:19:36.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.043 "hdgst": false, 00:19:36.043 "ddgst": false, 00:19:36.043 "multipath": "multipath" 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_nvme_set_hotplug", 00:19:36.043 "params": { 00:19:36.043 "period_us": 100000, 00:19:36.043 "enable": false 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_enable_histogram", 00:19:36.043 "params": { 00:19:36.043 "name": "nvme0n1", 00:19:36.043 "enable": true 00:19:36.043 } 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "method": "bdev_wait_for_examine" 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }, 00:19:36.043 { 00:19:36.043 "subsystem": "nbd", 00:19:36.043 "config": [] 00:19:36.043 } 00:19:36.043 ] 00:19:36.043 }' 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.043 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.043 [2024-12-10 04:06:35.259106] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:36.043 [2024-12-10 04:06:35.259171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82146 ] 00:19:36.301 [2024-12-10 04:06:35.332465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.301 [2024-12-10 04:06:35.372850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.301 [2024-12-10 04:06:35.526336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.867 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.867 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.867 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.867 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:37.125 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.125 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.125 Running I/O for 1 seconds... 00:19:38.500 5209.00 IOPS, 20.35 MiB/s 00:19:38.500 Latency(us) 00:19:38.500 [2024-12-10T03:06:37.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.500 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:38.500 Verification LBA range: start 0x0 length 0x2000 00:19:38.500 nvme0n1 : 1.03 5166.78 20.18 0.00 0.00 24465.98 4743.56 32455.92 00:19:38.500 [2024-12-10T03:06:37.786Z] =================================================================================================================== 00:19:38.500 [2024-12-10T03:06:37.786Z] Total : 5166.78 20.18 0.00 0.00 24465.98 4743.56 32455.92 00:19:38.500 { 00:19:38.500 "results": [ 00:19:38.500 { 00:19:38.500 "job": "nvme0n1", 00:19:38.500 "core_mask": "0x2", 00:19:38.500 "workload": "verify", 00:19:38.500 "status": "finished", 00:19:38.500 "verify_range": { 00:19:38.500 "start": 0, 00:19:38.500 "length": 8192 00:19:38.500 }, 00:19:38.500 "queue_depth": 128, 00:19:38.500 "io_size": 4096, 00:19:38.500 "runtime": 1.033139, 00:19:38.500 "iops": 5166.778139243606, 00:19:38.500 "mibps": 20.182727106420337, 00:19:38.500 "io_failed": 0, 00:19:38.500 "io_timeout": 0, 00:19:38.500 "avg_latency_us": 24465.97823475887, 00:19:38.500 "min_latency_us": 4743.558095238095, 00:19:38.500 "max_latency_us": 32455.92380952381 00:19:38.500 } 00:19:38.500 ], 00:19:38.500 "core_count": 1 00:19:38.500 } 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:38.500 nvmf_trace.0 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 82146 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82146 ']' 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82146 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82146 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.500 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82146' 00:19:38.501 killing process with pid 82146 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82146 00:19:38.501 Received shutdown signal, test time was about 1.000000 seconds 00:19:38.501 00:19:38.501 Latency(us) 00:19:38.501 [2024-12-10T03:06:37.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.501 [2024-12-10T03:06:37.787Z] =================================================================================================================== 00:19:38.501 [2024-12-10T03:06:37.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82146 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.501 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.501 rmmod nvme_tcp 00:19:38.501 rmmod nvme_fabrics 00:19:38.501 rmmod nvme_keyring 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.760 04:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 82022 ']' 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 82022 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82022 ']' 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82022 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82022 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82022' 00:19:38.760 killing process with pid 82022 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82022 00:19:38.760 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82022 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.760 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZMi46lEd1u /tmp/tmp.5p41rZTTMi /tmp/tmp.29uvJDZwgq 00:19:41.409 00:19:41.409 real 1m19.138s 00:19:41.409 user 2m0.083s 00:19:41.409 sys 0m31.470s 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.409 ************************************ 00:19:41.409 END TEST nvmf_tls 00:19:41.409 
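Condensed, the nvmftestfini teardown traced above amounts to three steps: strip every iptables rule the harness tagged with an SPDK_NVMF comment, flush the test interface, and remove the temporary key files. A sketch using the names from this run:

  # Restore the firewall minus the SPDK-tagged rules (the iptr helper).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Flush the initiator-side interface and drop the per-run PSK files.
  ip -4 addr flush cvl_0_1
  rm -f /tmp/tmp.ZMi46lEd1u /tmp/tmp.5p41rZTTMi /tmp/tmp.29uvJDZwgq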
************************************ 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.409 ************************************ 00:19:41.409 START TEST nvmf_fips 00:19:41.409 ************************************ 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:41.409 * Looking for test storage... 00:19:41.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:41.409 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:41.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.410 --rc genhtml_branch_coverage=1 00:19:41.410 --rc genhtml_function_coverage=1 00:19:41.410 --rc genhtml_legend=1 00:19:41.410 --rc geninfo_all_blocks=1 00:19:41.410 --rc geninfo_unexecuted_blocks=1 00:19:41.410 00:19:41.410 ' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:41.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.410 --rc genhtml_branch_coverage=1 00:19:41.410 --rc genhtml_function_coverage=1 00:19:41.410 --rc genhtml_legend=1 00:19:41.410 --rc geninfo_all_blocks=1 00:19:41.410 --rc geninfo_unexecuted_blocks=1 00:19:41.410 00:19:41.410 ' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:41.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.410 --rc genhtml_branch_coverage=1 00:19:41.410 --rc genhtml_function_coverage=1 00:19:41.410 --rc genhtml_legend=1 00:19:41.410 --rc geninfo_all_blocks=1 00:19:41.410 --rc geninfo_unexecuted_blocks=1 00:19:41.410 00:19:41.410 ' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:41.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.410 --rc genhtml_branch_coverage=1 00:19:41.410 --rc genhtml_function_coverage=1 00:19:41.410 --rc genhtml_legend=1 00:19:41.410 --rc geninfo_all_blocks=1 00:19:41.410 --rc geninfo_unexecuted_blocks=1 00:19:41.410 00:19:41.410 ' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
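The version test traced above (lt 1.15 2, via cmp_versions) splits both strings on dots and dashes and compares them field by field. A stand-alone sketch of the same logic, assuming purely numeric fields (the real helper also normalizes non-numeric components through its decimal function):

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v d1 d2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
          if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
      done
      [[ $op == *=* ]]    # every field equal
  }

  cmp_versions 3.1.1 '>=' 3.0.0 && echo 'new enough'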
FreeBSD ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:41.410 04:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:41.410 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:41.411 Error setting digest 00:19:41.411 4022D2362D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:41.411 4022D2362D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.411 
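The FIPS probe sequence above boils down to three checks: fips.so exists under the OpenSSL modules directory, openssl list shows both a base and a fips provider, and MD5 (a non-approved digest) is refused. A condensed sketch; it feeds MD5 via stdin rather than the harness's /dev/fd/62 form, and assumes OPENSSL_CONF already points at a config that activates the FIPS provider, as the harness arranges with its generated spdk_fips.conf:

  [ -f "$(openssl info -modulesdir)/fips.so" ] || echo 'FIPS module missing' >&2
  openssl list -providers | grep name           # expect base and fips entries
  if echo -n test | openssl md5 >/dev/null 2>&1; then
      echo 'MD5 succeeded: FIPS mode is NOT being enforced' >&2
  else
      echo 'MD5 rejected: FIPS provider is enforcing'   # matches the digest error above
  fi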
04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.411 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.981 04:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:47.981 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:47.981 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.981 04:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:47.981 Found net devices under 0000:af:00.0: cvl_0_0 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:47.981 Found net devices under 0000:af:00.1: cvl_0_1 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.981 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.982 04:06:46 
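Device discovery above walks /sys: for each whitelisted PCI function it expands the net/ directory to find the bound kernel interfaces (cvl_0_0 and cvl_0_1 here). A sketch of the same walk; the up-state test is approximated with operstate, since the exact check inside common.sh is not visible in this trace:

  pci=0000:af:00.0                      # address taken from the log
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      name=${dev##*/}
      state=$(cat "$dev/operstate" 2>/dev/null)
      [ "$state" = up ] && echo "Found net device under $pci: $name"
  done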
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:47.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:19:47.982 00:19:47.982 --- 10.0.0.2 ping statistics --- 00:19:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.982 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
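The nvmf_tcp_init sequence being traced wires the two ports of the NIC into a point-to-point test path: the target-side port moves into its own network namespace, both ends get 10.0.0.x addresses, TCP/4420 is opened, and reachability is verified in both directions. Condensed (the iptables comment text is illustrative; the harness stores the full rule string there):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment SPDK_NVMF:allow-4420
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator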
00:19:47.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:47.982 00:19:47.982 --- 10.0.0.1 ping statistics --- 00:19:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.982 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86042 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86042 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86042 ']' 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.982 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.982 [2024-12-10 04:06:46.497899] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:19:47.982 [2024-12-10 04:06:46.497950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.982 [2024-12-10 04:06:46.575180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.982 [2024-12-10 04:06:46.614909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.982 [2024-12-10 04:06:46.614945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.982 [2024-12-10 04:06:46.614953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.982 [2024-12-10 04:06:46.614959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.982 [2024-12-10 04:06:46.614964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.982 [2024-12-10 04:06:46.615469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.WNR 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.WNR 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.WNR 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.WNR 00:19:48.241 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:48.500 [2024-12-10 04:06:47.525514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.500 [2024-12-10 04:06:47.541516] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.500 [2024-12-10 04:06:47.541719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.500 malloc0 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.500 04:06:47 
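Key handling above is deliberately narrow: the interchange-format PSK is written with echo -n (presumably so no trailing newline ends up in the key material) into a mktemp file that only the owner can read, and only the file path is handed to the RPCs. The same steps, using the key shown in this run:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)    # /tmp/spdk-psk.WNR in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"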
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=86257 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 86257 /var/tmp/bdevperf.sock 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86257 ']' 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.500 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.501 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.501 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.501 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.501 [2024-12-10 04:06:47.669695] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:19:48.501 [2024-12-10 04:06:47.669744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86257 ] 00:19:48.501 [2024-12-10 04:06:47.744726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.760 [2024-12-10 04:06:47.785522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.328 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.328 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:49.328 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.WNR 00:19:49.586 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.586 [2024-12-10 04:06:48.850869] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.845 TLSTESTn1 00:19:49.845 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.845 Running I/O for 10 seconds... 
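The initiator-side setup just traced is a three-call sequence against bdevperf's private RPC socket: register the PSK file as key0 in the keyring, attach the controller over TCP with that key, then kick off the workload. Condensed, with the SPDK tree path shortened to $SPDK for readability:

  rpc="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc keyring_file_add_key key0 /tmp/spdk-psk.WNR
  $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests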
00:19:52.156 5502.00 IOPS, 21.49 MiB/s [2024-12-10T03:06:52.377Z] 5551.00 IOPS, 21.68 MiB/s [2024-12-10T03:06:53.313Z] 5611.00 IOPS, 21.92 MiB/s [2024-12-10T03:06:54.249Z] 5578.25 IOPS, 21.79 MiB/s [2024-12-10T03:06:55.190Z] 5566.60 IOPS, 21.74 MiB/s [2024-12-10T03:06:56.129Z] 5558.83 IOPS, 21.71 MiB/s [2024-12-10T03:06:57.066Z] 5556.71 IOPS, 21.71 MiB/s [2024-12-10T03:06:58.442Z] 5570.50 IOPS, 21.76 MiB/s [2024-12-10T03:06:59.379Z] 5543.44 IOPS, 21.65 MiB/s [2024-12-10T03:06:59.379Z] 5553.70 IOPS, 21.69 MiB/s 00:20:00.093 Latency(us) 00:20:00.093 [2024-12-10T03:06:59.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:00.093 Verification LBA range: start 0x0 length 0x2000 00:20:00.093 TLSTESTn1 : 10.01 5558.44 21.71 0.00 0.00 22995.04 5773.41 23717.79 00:20:00.093 [2024-12-10T03:06:59.379Z] =================================================================================================================== 00:20:00.093 [2024-12-10T03:06:59.379Z] Total : 5558.44 21.71 0.00 0.00 22995.04 5773.41 23717.79 00:20:00.093 { 00:20:00.093 "results": [ 00:20:00.093 { 00:20:00.093 "job": "TLSTESTn1", 00:20:00.093 "core_mask": "0x4", 00:20:00.093 "workload": "verify", 00:20:00.093 "status": "finished", 00:20:00.093 "verify_range": { 00:20:00.093 "start": 0, 00:20:00.093 "length": 8192 00:20:00.093 }, 00:20:00.093 "queue_depth": 128, 00:20:00.093 "io_size": 4096, 00:20:00.093 "runtime": 10.014134, 00:20:00.093 "iops": 5558.443695680525, 00:20:00.093 "mibps": 21.71267068625205, 00:20:00.093 "io_failed": 0, 00:20:00.093 "io_timeout": 0, 00:20:00.093 "avg_latency_us": 22995.040285117153, 00:20:00.093 "min_latency_us": 5773.409523809524, 00:20:00.093 "max_latency_us": 23717.790476190476 00:20:00.093 } 00:20:00.093 ], 00:20:00.093 "core_count": 1 00:20:00.093 } 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:00.093 nvmf_trace.0 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86257 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86257 ']' 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 86257 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86257 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86257' 00:20:00.093 killing process with pid 86257 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86257 00:20:00.093 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.093 00:20:00.093 Latency(us) 00:20:00.093 [2024-12-10T03:06:59.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.093 [2024-12-10T03:06:59.379Z] =================================================================================================================== 00:20:00.093 [2024-12-10T03:06:59.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.093 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86257 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.352 rmmod nvme_tcp 00:20:00.352 rmmod nvme_fabrics 00:20:00.352 rmmod nvme_keyring 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86042 ']' 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86042 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86042 ']' 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86042 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86042 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.352 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86042' 00:20:00.352 killing process with pid 86042 00:20:00.353 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86042 00:20:00.353 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86042 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.612 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.WNR 00:20:02.517 00:20:02.517 real 0m21.588s 00:20:02.517 user 0m23.289s 00:20:02.517 sys 0m9.665s 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:02.517 ************************************ 00:20:02.517 END TEST nvmf_fips 00:20:02.517 ************************************ 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.517 04:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.777 ************************************ 00:20:02.777 START TEST nvmf_control_msg_list 00:20:02.777 ************************************ 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:02.777 * Looking for test storage... 
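Each test above runs under the same run_test wrapper, which accounts for the START TEST / END TEST banners and the real/user/sys summary between them. A rough sketch of its shape, inferred from this log (the real helper in autotest_common.sh does more, e.g. argument checks and xtrace management):

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }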
00:20:02.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.777 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.777 --rc genhtml_branch_coverage=1 00:20:02.777 --rc genhtml_function_coverage=1 00:20:02.777 --rc genhtml_legend=1 00:20:02.777 --rc geninfo_all_blocks=1 00:20:02.777 --rc geninfo_unexecuted_blocks=1 00:20:02.777 00:20:02.777 ' 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.777 --rc genhtml_branch_coverage=1 00:20:02.777 --rc genhtml_function_coverage=1 00:20:02.777 --rc genhtml_legend=1 00:20:02.777 --rc geninfo_all_blocks=1 00:20:02.777 --rc geninfo_unexecuted_blocks=1 00:20:02.777 00:20:02.777 ' 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.777 --rc genhtml_branch_coverage=1 00:20:02.777 --rc genhtml_function_coverage=1 00:20:02.777 --rc genhtml_legend=1 00:20:02.777 --rc geninfo_all_blocks=1 00:20:02.777 --rc geninfo_unexecuted_blocks=1 00:20:02.777 00:20:02.777 ' 00:20:02.777 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.777 --rc genhtml_branch_coverage=1 00:20:02.778 --rc genhtml_function_coverage=1 00:20:02.778 --rc genhtml_legend=1 00:20:02.778 --rc geninfo_all_blocks=1 00:20:02.778 --rc geninfo_unexecuted_blocks=1 00:20:02.778 00:20:02.778 ' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.778 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:09.345 04:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:09.345 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.345 04:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:09.345 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.345 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:09.346 Found net devices under 0000:af:00.0: cvl_0_0 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:09.346 Found net devices under 0000:af:00.1: cvl_0_1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.346 04:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:20:09.346 00:20:09.346 --- 10.0.0.2 ping statistics --- 00:20:09.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.346 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:20:09.346 00:20:09.346 --- 10.0.0.1 ping statistics --- 00:20:09.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.346 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=91731 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 91731 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 91731 ']' 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.346 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 [2024-12-10 04:07:08.025118] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:20:09.347 [2024-12-10 04:07:08.025159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.347 [2024-12-10 04:07:08.100117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.347 [2024-12-10 04:07:08.139681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.347 [2024-12-10 04:07:08.139719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.347 [2024-12-10 04:07:08.139726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.347 [2024-12-10 04:07:08.139733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.347 [2024-12-10 04:07:08.139739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
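The target coming up here runs inside the network namespace assembled by nvmftestinit above. Condensed to a minimal sketch (interface names, addresses, port, and binary path are taken from the log; the harness's retries, xtrace plumbing, and cleanup traps are omitted):

  # One port of the dual-port e810 moves into its own namespace, so target
  # (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1) talk over a real link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP on the initiator side; the SPDK_NVMF comment lets the
  # teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore drop it.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # waitforlisten 91731 then polls until /var/tmp/spdk.sock accepts RPCs.

A side note on the complaint logged during setup, common.sh: line 33: [: : integer expression expected: the traced test '[' '' -eq 1 ']' shows an unset flag expanding to an empty string before a numeric comparison. It is benign for the test outcome; the usual guard (VAR is a stand-in here, since the log does not show which variable common.sh line 33 reads) would be:

  [ "${VAR:-0}" -eq 1 ]   # default unset/empty to 0 before the numeric test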
00:20:09.347 [2024-12-10 04:07:08.140222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 [2024-12-10 04:07:08.284306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 Malloc0 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 04:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 [2024-12-10 04:07:08.332671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=91751 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=91752 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=91753 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 91751 00:20:09.347 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.347 [2024-12-10 04:07:08.407058] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:09.347 [2024-12-10 04:07:08.417053] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:09.347 [2024-12-10 04:07:08.427097] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.283 Initializing NVMe Controllers 00:20:10.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:10.283 Initialization complete. Launching workers. 
00:20:10.283 ======================================================== 00:20:10.283 Latency(us) 00:20:10.283 Device Information : IOPS MiB/s Average min max 00:20:10.283 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40909.65 40767.44 41091.04 00:20:10.283 ======================================================== 00:20:10.283 Total : 25.00 0.10 40909.65 40767.44 41091.04 00:20:10.283 00:20:10.283 Initializing NVMe Controllers 00:20:10.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:10.283 Initialization complete. Launching workers. 00:20:10.283 ======================================================== 00:20:10.283 Latency(us) 00:20:10.283 Device Information : IOPS MiB/s Average min max 00:20:10.283 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 7505.00 29.32 132.91 121.34 360.78 00:20:10.283 ======================================================== 00:20:10.283 Total : 7505.00 29.32 132.91 121.34 360.78 00:20:10.283 00:20:10.283 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 91752 00:20:10.542 Initializing NVMe Controllers 00:20:10.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:10.542 Initialization complete. Launching workers. 00:20:10.542 ======================================================== 00:20:10.542 Latency(us) 00:20:10.542 Device Information : IOPS MiB/s Average min max 00:20:10.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40895.39 40759.54 41010.02 00:20:10.542 ======================================================== 00:20:10.542 Total : 25.00 0.10 40895.39 40759.54 41010.02 00:20:10.542 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 91753 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.542 rmmod nvme_tcp 00:20:10.542 rmmod nvme_fabrics 00:20:10.542 rmmod nvme_keyring 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 91731 ']' 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 91731 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 91731 ']' 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 91731 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91731 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91731' 00:20:10.542 killing process with pid 91731 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 91731 00:20:10.542 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 91731 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.800 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.336 00:20:13.336 real 0m10.201s 00:20:13.336 user 0m6.766s 00:20:13.336 sys 0m5.306s 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:13.336 ************************************ 00:20:13.336 END TEST nvmf_control_msg_list 00:20:13.336 ************************************ 00:20:13.336 
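Before the next suite starts, the whole nvmf_control_msg_list flow above fits in a short sketch (every flag is taken from the log; rpc.py stands in for the harness's rpc_cmd wrapper against the same /var/tmp/spdk.sock, and paths are shown repo-relative):

  # Transport with 768 B in-capsule data and a control-message pool
  # deliberately starved to one entry.
  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a      # -a: allow any host
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512                     # 32 MiB RAM bdev, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Three queue-depth-1 4 KiB randread workers on separate cores (pids
  # 91751-91753 above), all contending for the single control message.
  for mask in 0x2 0x4 0x8; do
      build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

That two of the three workers report ~25 IOPS at ~40.9 ms average latency while the third sustains 7505 IOPS at ~133 us is consistent with connections queuing on the exhausted control-message list, which is exactly the resource this test constrains; the test ends successfully because all three runs still complete.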
04:07:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.336 ************************************ 00:20:13.336 START TEST nvmf_wait_for_buf 00:20:13.336 ************************************ 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:13.336 * Looking for test storage... 00:20:13.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:13.336 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.337 --rc genhtml_branch_coverage=1 00:20:13.337 --rc genhtml_function_coverage=1 00:20:13.337 --rc genhtml_legend=1 00:20:13.337 --rc geninfo_all_blocks=1 00:20:13.337 --rc geninfo_unexecuted_blocks=1 00:20:13.337 00:20:13.337 ' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.337 --rc genhtml_branch_coverage=1 00:20:13.337 --rc genhtml_function_coverage=1 00:20:13.337 --rc genhtml_legend=1 00:20:13.337 --rc geninfo_all_blocks=1 00:20:13.337 --rc geninfo_unexecuted_blocks=1 00:20:13.337 00:20:13.337 ' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.337 --rc genhtml_branch_coverage=1 00:20:13.337 --rc genhtml_function_coverage=1 00:20:13.337 --rc genhtml_legend=1 00:20:13.337 --rc geninfo_all_blocks=1 00:20:13.337 --rc geninfo_unexecuted_blocks=1 00:20:13.337 00:20:13.337 ' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.337 --rc genhtml_branch_coverage=1 00:20:13.337 --rc genhtml_function_coverage=1 00:20:13.337 --rc genhtml_legend=1 00:20:13.337 --rc geninfo_all_blocks=1 00:20:13.337 --rc geninfo_unexecuted_blocks=1 00:20:13.337 00:20:13.337 ' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.337 04:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.337 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.911 
04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:19.911 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:19.911 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:19.911 Found net devices under 0000:af:00.0: cvl_0_0 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:19.911 Found net devices under 0000:af:00.1: cvl_0_1 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.911 04:07:17 
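The device-discovery loop being traced above is driven entirely by sysfs: each PCI function that matches an entry in the vendor:device tables is resolved to its kernel netdev through /sys/bus/pci/devices/$pci/net/, which is exactly what the pci_net_devs=(...) glob does before the "Found net devices under ..." echo. A standalone sketch of the same walk, narrowed to the one ID pair this log actually matched (Intel 0x8086:0x159b, the two E810 ports bound to the ice driver):

vendor=0x8086 device=0x159b              # taken from the 'Found 0000:af:00.x' lines above
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = "$vendor" ] || continue
    [ "$(cat "$pci/device")" = "$device" ] || continue
    for net in "$pci"/net/*; do          # netdev name the function exposes, if any
        [ -e "$net" ] || continue
        printf 'Found %s (%s - %s): %s\n' \
            "${pci##*/}" "$vendor" "$device" "${net##*/}"
    done
done

On this machine the loop prints the two cvl_0_0 / cvl_0_1 interfaces that the rest of the run builds on.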
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.911 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:20:19.911 00:20:19.911 --- 10.0.0.2 ping statistics --- 00:20:19.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.911 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:20:19.911 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:20:19.912 00:20:19.912 --- 10.0.0.1 ping statistics --- 00:20:19.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.912 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=95446 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 95446 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 95446 ']' 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 [2024-12-10 04:07:18.264058] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
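What nvmf_tcp_init just did, in plain terms: with two cabled E810 ports on one host, it isolated the target-side port (cvl_0_0, 10.0.0.2/24) in a fresh network namespace, left the initiator-side port (cvl_0_1, 10.0.0.1/24) in the root namespace, opened TCP port 4420 with a comment-tagged iptables rule, and ping-verified both directions. A sketch of the same topology under stated assumptions: placeholder interface names eth0/eth1 instead of the cvl_0_* names from this machine, run as root, ports cabled back-to-back:

ip netns add spdk_tgt_ns                             # target lives in its own namespace
ip link set eth0 netns spdk_tgt_ns                   # target-side port moves in
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth0
ip addr add 10.0.0.1/24 dev eth1                     # initiator stays in the root namespace
ip netns exec spdk_tgt_ns ip link set eth0 up
ip netns exec spdk_tgt_ns ip link set lo up
ip link set eth1 up
iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1         # target -> initiator

The SPDK_NVMF comment tag mirrors the ipts wrapper in the trace; it is what lets teardown later strip only the rules this test added.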
00:20:19.912 [2024-12-10 04:07:18.264106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.912 [2024-12-10 04:07:18.340962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.912 [2024-12-10 04:07:18.380616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.912 [2024-12-10 04:07:18.380651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.912 [2024-12-10 04:07:18.380661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.912 [2024-12-10 04:07:18.380666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.912 [2024-12-10 04:07:18.380671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.912 [2024-12-10 04:07:18.381144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 
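The target is deliberately started with --wait-for-rpc so the suite can shrink the iobuf small pool to just 154 buffers of 8192 bytes before subsystem init; that is the whole point of wait_for_buf, since a pool that small is guaranteed to run dry under load and exercise the buffer-wait retry path. The same bring-up, issued through SPDK's scripts/rpc.py rather than the suite's rpc_cmd wrapper; method names and flags are copied from the trace, while the default /var/tmp/spdk.sock socket and the fixed sleep standing in for waitforlisten are simplifying assumptions:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
sleep 2                                              # the suite polls with waitforlisten instead
./scripts/rpc.py -s /var/tmp/spdk.sock accel_set_options \
    --small-cache-size 0 --large-cache-size 0
./scripts/rpc.py -s /var/tmp/spdk.sock iobuf_set_options \
    --small-pool-count 154 --small_bufsize=8192      # deliberately tiny small pool
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init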
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 Malloc0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 [2024-12-10 04:07:18.562986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.912 [2024-12-10 04:07:18.591184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.912 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.912 [2024-12-10 04:07:18.675019] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:20.937 Initializing NVMe Controllers
00:20:20.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:20.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:20:20.937 Initialization complete. Launching workers.
00:20:20.937 ========================================================
00:20:20.937                                                                             Latency(us)
00:20:20.937 Device Information                                                     :    IOPS    MiB/s    Average        min        max
00:20:20.937 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:  129.00    16.12   32240.07    7307.30   63844.16
00:20:20.937 ========================================================
00:20:20.937 Total                                                                  :  129.00    16.12   32240.07    7307.30   63844.16
00:20:20.937
00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.937 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:20.937 rmmod nvme_tcp
00:20:20.937 rmmod nvme_fabrics
00:20:21.195 rmmod nvme_keyring
00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 95446 ']' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 95446 ']' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
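The pass/fail logic sits in the two wait_for_buf.sh@32 fragments above: after spdk_nvme_perf has hammered the 154-buffer pool with 4-deep 128 KiB random reads for one second, iobuf_get_stats must report a non-zero small-pool retry count for the nvmf_TCP module (here 2038), proving the pool really was exhausted and callers had to wait for buffers. Reassembled as a standalone check, with the jq filter copied verbatim from the trace and the default RPC socket assumed:

retry_count=$(./scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [ "$retry_count" -eq 0 ]; then
    echo "FAIL: no iobuf small-pool retries recorded" >&2
    exit 1
fi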
common/autotest_common.sh@959 -- # uname 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95446' 00:20:21.195 killing process with pid 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 95446 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.195 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.727 00:20:23.727 real 0m10.429s 00:20:23.727 user 0m3.980s 00:20:23.727 sys 0m4.891s 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:23.727 ************************************ 00:20:23.727 END TEST nvmf_wait_for_buf 00:20:23.727 ************************************ 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.727 04:07:22 nvmf_tcp.nvmf_target_extra -- 
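The teardown order nvmftestfini walks through above is worth noting: kill the target before unloading nvme-tcp/nvme-fabrics (the modprobe -r call sits in a 20-iteration loop because the modules stay busy while connections drain), then strip only the comment-tagged firewall rules with an iptables-save round trip, and finally flush addresses and drop the namespace. A condensed equivalent, reusing the placeholder pid and names from the sketches above rather than this run's pid 95446 and cvl_0_* devices:

kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null           # target must exit before module unload
for i in {1..20}; do modprobe -v -r nvme-tcp && break; sleep 1; done
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
ip netns delete spdk_tgt_ns 2>/dev/null
ip -4 addr flush eth1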
common/autotest_common.sh@10 -- # set +x 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.997 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.998 
04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:28.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:28.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:28.998 Found net devices under 0000:af:00.0: cvl_0_0 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:20:28.998 Found net devices under 0000:af:00.1: cvl_0_1 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.998 ************************************ 00:20:28.998 START TEST nvmf_perf_adq 00:20:28.998 ************************************ 00:20:28.998 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:29.257 * Looking for test storage... 00:20:29.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:29.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.257 --rc genhtml_branch_coverage=1 00:20:29.257 --rc genhtml_function_coverage=1 00:20:29.257 --rc genhtml_legend=1 00:20:29.257 --rc geninfo_all_blocks=1 00:20:29.257 --rc geninfo_unexecuted_blocks=1 00:20:29.257 00:20:29.257 ' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:29.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.257 --rc genhtml_branch_coverage=1 00:20:29.257 --rc genhtml_function_coverage=1 00:20:29.257 --rc genhtml_legend=1 00:20:29.257 --rc geninfo_all_blocks=1 00:20:29.257 --rc geninfo_unexecuted_blocks=1 00:20:29.257 00:20:29.257 ' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:29.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.257 --rc genhtml_branch_coverage=1 00:20:29.257 --rc genhtml_function_coverage=1 00:20:29.257 --rc genhtml_legend=1 00:20:29.257 --rc geninfo_all_blocks=1 00:20:29.257 --rc geninfo_unexecuted_blocks=1 00:20:29.257 00:20:29.257 ' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:29.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.257 --rc genhtml_branch_coverage=1 00:20:29.257 --rc genhtml_function_coverage=1 00:20:29.257 --rc genhtml_legend=1 00:20:29.257 --rc geninfo_all_blocks=1 00:20:29.257 --rc geninfo_unexecuted_blocks=1 00:20:29.257 00:20:29.257 ' 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- 
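The cmp_versions stepping above (scripts/common.sh, invoked as "lt 1.15 2" against the installed lcov version) is a field-wise dotted-version comparison: split both strings on dots, compare field by field numerically, and treat missing fields in the shorter version as zero. A trimmed re-implementation of the same idea, not the script's exact code; it returns success when the first argument is strictly less than the second:

version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0      # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                           # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"                   # matches the 'lt 1.15 2' call in the trace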
# uname -s 00:20:29.257 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:29.258 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.258 04:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.819 04:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:35.819 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:35.819 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:35.819 Found net devices under 0000:af:00.0: cvl_0_0 00:20:35.819 04:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:35.819 Found net devices under 0000:af:00.1: cvl_0_1 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:35.819 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:35.819 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:38.353 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:43.624 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:43.624 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:43.624 Found net devices under 0000:af:00.0: cvl_0_0 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.624 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:43.625 Found net devices under 0000:af:00.1: cvl_0_1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.835 ms 00:20:43.625 00:20:43.625 --- 10.0.0.2 ping statistics --- 00:20:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.625 rtt min/avg/max/mdev = 0.835/0.835/0.835/0.000 ms 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:43.625 00:20:43.625 --- 10.0.0.1 ping statistics --- 00:20:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.625 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=103763 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 103763 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 103763 ']' 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.625 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.625 [2024-12-10 04:07:42.889213] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
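nvmf_tcp_init, whose individual commands are traced above, splits the two E810 ports into a single-host target/initiator pair: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits the NVMe/TCP port, and both directions are ping-verified. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target ns
ping -c 1 10.0.0.2                                             # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator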
00:20:43.625 [2024-12-10 04:07:42.889267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.883 [2024-12-10 04:07:42.969391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.883 [2024-12-10 04:07:43.011445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.883 [2024-12-10 04:07:43.011483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.883 [2024-12-10 04:07:43.011490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.883 [2024-12-10 04:07:43.011496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.883 [2024-12-10 04:07:43.011501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.883 [2024-12-10 04:07:43.012947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.883 [2024-12-10 04:07:43.013056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.883 [2024-12-10 04:07:43.013172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.883 [2024-12-10 04:07:43.013185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.449 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.449 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:44.449 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.449 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.449 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 
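adq_configure_nvmf_target, stepping through here, builds the whole target over RPC; rpc_cmd is the autotest wrapper around scripts/rpc.py. A minimal equivalent of this baseline pass (argument 0), assuming rpc_cmd maps one-to-one onto rpc.py against the default /var/tmp/spdk.sock, with the flags themselves copied from the trace:

rpc=(scripts/rpc.py -s /var/tmp/spdk.sock)    # assumed expansion of rpc_cmd
"${rpc[@]}" sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
"${rpc[@]}" framework_start_init              # needed because nvmf_tgt ran with --wait-for-rpc
"${rpc[@]}" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
"${rpc[@]}" bdev_malloc_create 64 512 -b Malloc1   # 64 MiB RAM-backed bdev, 512-byte blocks
"${rpc[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"${rpc[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"${rpc[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420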
04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 [2024-12-10 04:07:43.903963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 Malloc1 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.708 [2024-12-10 04:07:43.959928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=104011 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:44.708 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:47.237 "tick_rate": 2100000000, 00:20:47.237 "poll_groups": [ 00:20:47.237 { 00:20:47.237 "name": "nvmf_tgt_poll_group_000", 00:20:47.237 "admin_qpairs": 1, 00:20:47.237 "io_qpairs": 1, 00:20:47.237 "current_admin_qpairs": 1, 00:20:47.237 "current_io_qpairs": 1, 00:20:47.237 "pending_bdev_io": 0, 00:20:47.237 "completed_nvme_io": 19206, 00:20:47.237 "transports": [ 00:20:47.237 { 00:20:47.237 "trtype": "TCP" 00:20:47.237 } 00:20:47.237 ] 00:20:47.237 }, 00:20:47.237 { 00:20:47.237 "name": "nvmf_tgt_poll_group_001", 00:20:47.237 "admin_qpairs": 0, 00:20:47.237 "io_qpairs": 1, 00:20:47.237 "current_admin_qpairs": 0, 00:20:47.237 "current_io_qpairs": 1, 00:20:47.237 "pending_bdev_io": 0, 00:20:47.237 "completed_nvme_io": 19210, 00:20:47.237 "transports": [ 00:20:47.237 { 00:20:47.237 "trtype": "TCP" 00:20:47.237 } 00:20:47.237 ] 00:20:47.237 }, 00:20:47.237 { 00:20:47.237 "name": "nvmf_tgt_poll_group_002", 00:20:47.237 "admin_qpairs": 0, 00:20:47.237 "io_qpairs": 1, 00:20:47.237 "current_admin_qpairs": 0, 00:20:47.237 "current_io_qpairs": 1, 00:20:47.237 "pending_bdev_io": 0, 00:20:47.237 "completed_nvme_io": 19675, 00:20:47.237 "transports": [ 00:20:47.237 { 00:20:47.237 "trtype": "TCP" 00:20:47.237 } 00:20:47.237 ] 00:20:47.237 }, 00:20:47.237 { 00:20:47.237 "name": "nvmf_tgt_poll_group_003", 00:20:47.237 "admin_qpairs": 0, 00:20:47.237 "io_qpairs": 1, 00:20:47.237 "current_admin_qpairs": 0, 00:20:47.237 "current_io_qpairs": 1, 00:20:47.237 "pending_bdev_io": 0, 00:20:47.237 "completed_nvme_io": 19069, 00:20:47.237 "transports": [ 00:20:47.237 { 00:20:47.237 "trtype": "TCP" 00:20:47.237 } 00:20:47.237 ] 00:20:47.237 } 00:20:47.237 ] 00:20:47.237 }' 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:47.237 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:47.237 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:47.237 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:47.237 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 104011 00:20:55.347 Initializing NVMe Controllers 00:20:55.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:20:55.347 Initialization complete. Launching workers. 00:20:55.347 ======================================================== 00:20:55.347 Latency(us) 00:20:55.347 Device Information : IOPS MiB/s Average min max 00:20:55.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10152.40 39.66 6305.27 1579.20 10355.15 00:20:55.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10311.00 40.28 6207.11 2255.73 10579.28 00:20:55.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10463.20 40.87 6117.46 1844.53 10341.64 00:20:55.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10337.30 40.38 6192.06 2106.66 10830.92 00:20:55.347 ======================================================== 00:20:55.347 Total : 41263.90 161.19 6204.76 1579.20 10830.92 00:20:55.347 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.347 rmmod nvme_tcp 00:20:55.347 rmmod nvme_fabrics 00:20:55.347 rmmod nvme_keyring 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 103763 ']' 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 103763 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 103763 ']' 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 103763 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103763 00:20:55.347 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103763' 00:20:55.348 killing process with pid 103763 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 103763 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 103763 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.348 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.249 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.249 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:57.249 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:57.249 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:58.626 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:01.157 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.595 04:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:06.595 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:06.595 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.595 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:06.596 Found net devices under 0000:af:00.0: cvl_0_0 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.596 04:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:06.596 Found net devices under 0000:af:00.1: cvl_0_1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:21:06.596 00:21:06.596 --- 10.0.0.2 ping statistics --- 00:21:06.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.596 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:21:06.596 00:21:06.596 --- 10.0.0.1 ping statistics --- 00:21:06.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.596 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:06.596 net.core.busy_poll = 1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:06.596 net.core.busy_read = 1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:06.596 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=108037 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 108037 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 108037 ']' 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.855 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.855 [2024-12-10 04:08:05.966983] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:06.855 [2024-12-10 04:08:05.967030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.855 [2024-12-10 04:08:06.051600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.855 [2024-12-10 04:08:06.093018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
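adq_configure_driver, traced immediately above, is where ADQ is actually wired into the NIC: hardware tc offload goes on, busy polling goes on, the port is carved into two traffic classes, and a hardware-only flower filter pins the NVMe/TCP listener's flows into the second class. Condensed, with the device-side work done inside the target namespace exactly as in the trace:

ns=(ip netns exec cvl_0_0_ns_spdk)
"${ns[@]}" ethtool --offload cvl_0_0 hw-tc-offload on
"${ns[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# TC0 = 2 queues at offset 0 (default traffic), TC1 = 2 queues at offset 2 (ADQ), offloaded in channel mode:
"${ns[@]}" tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
"${ns[@]}" tc qdisc add dev cvl_0_0 ingress
# Steer 10.0.0.2:4420 into TC1, in hardware only (skip_sw):
"${ns[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# the trace then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS tx-queue selection with the rx queues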
00:21:06.856 [2024-12-10 04:08:06.093055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.856 [2024-12-10 04:08:06.093062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.856 [2024-12-10 04:08:06.093069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.856 [2024-12-10 04:08:06.093074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.856 [2024-12-10 04:08:06.094532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.856 [2024-12-10 04:08:06.094561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.856 [2024-12-10 04:08:06.094669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.856 [2024-12-10 04:08:06.094669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:07.791 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:06 
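Relative to the baseline pass earlier in this log, the only target-side deltas in this second, ADQ-enabled pass are the two values derived from adq_configure_nvmf_target's argument, now 1 instead of 0. As I read the option, placement id 1 makes the posix sock layer group accepted connections by their incoming NAPI ID, i.e. by hardware queue, so flows the flower filter steered onto the same queue get polled by the same poll group; sock priority 1 tags the target's sockets to match the busy-poll class. The two changed calls, using the same assumed rpc.py wrapper as above:

rpc=(scripts/rpc.py -s /var/tmp/spdk.sock)
"${rpc[@]}" sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server   # was 0
"${rpc[@]}" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1                  # was 0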
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 [2024-12-10 04:08:06.982606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 Malloc1 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.792 [2024-12-10 04:08:07.056094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=108249 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:07.792 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.324 04:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:10.324 "tick_rate": 2100000000, 00:21:10.324 "poll_groups": [ 00:21:10.324 { 00:21:10.324 "name": "nvmf_tgt_poll_group_000", 00:21:10.324 "admin_qpairs": 1, 00:21:10.324 "io_qpairs": 1, 00:21:10.324 "current_admin_qpairs": 1, 00:21:10.324 "current_io_qpairs": 1, 00:21:10.324 "pending_bdev_io": 0, 00:21:10.324 "completed_nvme_io": 28683, 00:21:10.324 "transports": [ 00:21:10.324 { 00:21:10.324 "trtype": "TCP" 00:21:10.324 } 00:21:10.324 ] 00:21:10.324 }, 00:21:10.324 { 00:21:10.324 "name": "nvmf_tgt_poll_group_001", 00:21:10.324 "admin_qpairs": 0, 00:21:10.324 "io_qpairs": 3, 00:21:10.324 "current_admin_qpairs": 0, 00:21:10.324 "current_io_qpairs": 3, 00:21:10.324 "pending_bdev_io": 0, 00:21:10.324 "completed_nvme_io": 29448, 00:21:10.324 "transports": [ 00:21:10.324 { 00:21:10.324 "trtype": "TCP" 00:21:10.324 } 00:21:10.324 ] 00:21:10.324 }, 00:21:10.324 { 00:21:10.324 "name": "nvmf_tgt_poll_group_002", 00:21:10.324 "admin_qpairs": 0, 00:21:10.324 "io_qpairs": 0, 00:21:10.324 "current_admin_qpairs": 0, 00:21:10.324 "current_io_qpairs": 0, 00:21:10.324 "pending_bdev_io": 0, 00:21:10.324 "completed_nvme_io": 0, 00:21:10.324 "transports": [ 00:21:10.324 { 00:21:10.324 "trtype": "TCP" 00:21:10.324 } 00:21:10.324 ] 00:21:10.324 }, 00:21:10.324 { 00:21:10.324 "name": "nvmf_tgt_poll_group_003", 00:21:10.324 "admin_qpairs": 0, 00:21:10.324 "io_qpairs": 0, 00:21:10.324 "current_admin_qpairs": 0, 00:21:10.324 "current_io_qpairs": 0, 00:21:10.324 "pending_bdev_io": 0, 00:21:10.324 "completed_nvme_io": 0, 00:21:10.324 "transports": [ 00:21:10.324 { 00:21:10.324 "trtype": "TCP" 00:21:10.324 } 00:21:10.324 ] 00:21:10.324 } 00:21:10.324 ] 00:21:10.324 }' 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:10.324 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 108249 00:21:18.441 Initializing NVMe Controllers 00:21:18.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:18.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:18.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:18.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:18.441 Initialization complete. Launching workers. 
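The stats probe a few lines up is the actual ADQ assertion: nvmf_get_stats is fetched over RPC and jq counts the poll groups with zero live I/O qpairs, since with the flower filter plus --sock-priority 1 every connection should collapse onto a subset of the four reactors. The same check as a standalone sketch, assuming the target answers on the default /var/tmp/spdk.sock:

# Count poll groups that currently own no I/O qpairs.
idle_groups=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)

# The test demands at least two of the four groups stay idle.
if [[ $idle_groups -lt 2 ]]; then
    echo "ADQ steering leaked I/O onto extra poll groups" >&2
    exit 1
fi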
00:21:18.441 ======================================================== 00:21:18.441 Latency(us) 00:21:18.441 Device Information : IOPS MiB/s Average min max 00:21:18.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5110.70 19.96 12561.98 1634.74 59736.94 00:21:18.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15637.50 61.08 4092.39 1557.63 6674.63 00:21:18.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5333.70 20.83 12001.45 1857.02 55743.79 00:21:18.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5107.50 19.95 12532.66 1902.05 57460.59 00:21:18.441 ======================================================== 00:21:18.441 Total : 31189.40 121.83 8214.90 1557.63 59736.94 00:21:18.441 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.441 rmmod nvme_tcp 00:21:18.441 rmmod nvme_fabrics 00:21:18.441 rmmod nvme_keyring 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 108037 ']' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 108037 ']' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108037' 00:21:18.441 killing process with pid 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 108037 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:18.441 04:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.441 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:21.731 00:21:21.731 real 0m52.458s 00:21:21.731 user 2m49.832s 00:21:21.731 sys 0m10.360s 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.731 ************************************ 00:21:21.731 END TEST nvmf_perf_adq 00:21:21.731 ************************************ 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.731 ************************************ 00:21:21.731 START TEST nvmf_shutdown 00:21:21.731 ************************************ 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.731 * Looking for test storage... 
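The teardown above is nvmftestfini in full: the NVMe modules come out, the target process is killed, every iptables rule tagged SPDK_NVMF is dropped by round-tripping the ruleset, the namespace is removed, and the leftover initiator address is flushed. Condensed into a sketch ($nvmfpid stands for the PID captured at startup; the netns delete is what remove_spdk_ns presumably does, since its body is hidden behind xtrace_disable here):

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" || true

# Re-apply the ruleset minus everything tagged with an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk   # assumption: the hidden _remove_spdk_ns step
ip -4 addr flush cvl_0_1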
00:21:21.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.731 --rc genhtml_branch_coverage=1 00:21:21.731 --rc genhtml_function_coverage=1 00:21:21.731 --rc genhtml_legend=1 00:21:21.731 --rc geninfo_all_blocks=1 00:21:21.731 --rc geninfo_unexecuted_blocks=1 00:21:21.731 00:21:21.731 ' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.731 --rc genhtml_branch_coverage=1 00:21:21.731 --rc genhtml_function_coverage=1 00:21:21.731 --rc genhtml_legend=1 00:21:21.731 --rc geninfo_all_blocks=1 00:21:21.731 --rc geninfo_unexecuted_blocks=1 00:21:21.731 00:21:21.731 ' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.731 --rc genhtml_branch_coverage=1 00:21:21.731 --rc genhtml_function_coverage=1 00:21:21.731 --rc genhtml_legend=1 00:21:21.731 --rc geninfo_all_blocks=1 00:21:21.731 --rc geninfo_unexecuted_blocks=1 00:21:21.731 00:21:21.731 ' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.731 --rc genhtml_branch_coverage=1 00:21:21.731 --rc genhtml_function_coverage=1 00:21:21.731 --rc genhtml_legend=1 00:21:21.731 --rc geninfo_all_blocks=1 00:21:21.731 --rc geninfo_unexecuted_blocks=1 00:21:21.731 00:21:21.731 ' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
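The lcov probe above leans on scripts/common.sh's pure-bash comparator: lt splits both dotted version strings into arrays (IFS=.-) and cmp_versions walks them field by field, treating missing fields as zero. A condensed sketch of the pattern (version_lt is a hypothetical name folding lt and cmp_versions together):

# Return 0 if $1 sorts strictly before $2 as dotted version strings.
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "pre-2.0 lcov: use the --rc lcov_branch_coverage=1 spellings"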
00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.731 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:21.732 04:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.732 ************************************ 00:21:21.732 START TEST nvmf_shutdown_tc1 00:21:21.732 ************************************ 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.732 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.301 04:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.301 04:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:28.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:28.301 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.301 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:28.302 Found net devices under 0000:af:00.0: cvl_0_0 00:21:28.302 04:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:28.302 Found net devices under 0000:af:00.1: cvl_0_1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:21:28.302 00:21:28.302 --- 10.0.0.2 ping statistics --- 00:21:28.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.302 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:28.302 00:21:28.302 --- 10.0.0.1 ping statistics --- 00:21:28.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.302 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=113621 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 113621 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 113621 ']' 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
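Each nvmftestinit rebuilds the same two-port loopback shown above: the target port moves into its own namespace, the initiator port keeps 10.0.0.1/24 in the root namespace, the target gets 10.0.0.2/24 inside, port 4420 is opened, and both directions are ping-verified before nvmf_tgt starts. A sketch of that topology with this run's device names:

NS=cvl_0_0_ns_spdk
TGT=cvl_0_0    # target-side port, moves into $NS
INIT=cvl_0_1   # initiator-side port, stays in the root namespace

ip netns add "$NS"
ip link set "$TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INIT"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INIT" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and verify reachability both ways.
iptables -I INPUT 1 -i "$INIT" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1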
00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.302 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.302 [2024-12-10 04:08:27.035930] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:28.302 [2024-12-10 04:08:27.035979] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.302 [2024-12-10 04:08:27.116567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.302 [2024-12-10 04:08:27.157895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.302 [2024-12-10 04:08:27.157930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.302 [2024-12-10 04:08:27.157937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.302 [2024-12-10 04:08:27.157943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.302 [2024-12-10 04:08:27.157953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.302 [2024-12-10 04:08:27.159456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.302 [2024-12-10 04:08:27.159567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.302 [2024-12-10 04:08:27.159675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.302 [2024-12-10 04:08:27.159676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.870 [2024-12-10 04:08:27.910892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:28.870 04:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.870 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.871 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.871 Malloc1 
00:21:28.871 [2024-12-10 04:08:28.034301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.871 Malloc2 00:21:28.871 Malloc3 00:21:28.871 Malloc4 00:21:29.129 Malloc5 00:21:29.129 Malloc6 00:21:29.129 Malloc7 00:21:29.129 Malloc8 00:21:29.129 Malloc9 00:21:29.129 Malloc10 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=113891 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 113891 /var/tmp/bdevperf.sock 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 113891 ']' 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
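starttarget batches one RPC block per subsystem into rpcs.txt (the ten cat calls above) and replays the file against the target, which is what yields Malloc1 through Malloc10 and the 10.0.0.2:4420 listener notice. Unbatched, the equivalent per-subsystem calls look like this sketch (serial numbers are arbitrary placeholders):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP, 8 KiB I/O unit size

for i in $(seq 1 10); do
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done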
00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.388 { 00:21:29.388 "params": { 00:21:29.388 "name": "Nvme$subsystem", 00:21:29.388 "trtype": "$TEST_TRANSPORT", 00:21:29.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.388 "adrfam": "ipv4", 00:21:29.388 "trsvcid": "$NVMF_PORT", 00:21:29.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.388 "hdgst": ${hdgst:-false}, 00:21:29.388 "ddgst": ${ddgst:-false} 00:21:29.388 }, 00:21:29.388 "method": "bdev_nvme_attach_controller" 00:21:29.388 } 00:21:29.388 EOF 00:21:29.388 )") 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.388 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.388 { 00:21:29.388 "params": { 00:21:29.388 "name": "Nvme$subsystem", 00:21:29.388 "trtype": "$TEST_TRANSPORT", 00:21:29.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.388 "adrfam": "ipv4", 00:21:29.388 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 [2024-12-10 04:08:28.510863] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 [2024-12-10 04:08:28.510912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.389 { 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme$subsystem", 00:21:29.389 "trtype": "$TEST_TRANSPORT", 00:21:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.389 "adrfam": "ipv4", 
00:21:29.389 "trsvcid": "$NVMF_PORT", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.389 "hdgst": ${hdgst:-false}, 00:21:29.389 "ddgst": ${ddgst:-false} 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 } 00:21:29.389 EOF 00:21:29.389 )") 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.389 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme1", 00:21:29.389 "trtype": "tcp", 00:21:29.389 "traddr": "10.0.0.2", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "4420", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.389 "hdgst": false, 00:21:29.389 "ddgst": false 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 },{ 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme2", 00:21:29.389 "trtype": "tcp", 00:21:29.389 "traddr": "10.0.0.2", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "4420", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.389 "hdgst": false, 00:21:29.389 "ddgst": false 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 },{ 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme3", 00:21:29.389 "trtype": "tcp", 00:21:29.389 "traddr": "10.0.0.2", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "4420", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.389 "hdgst": false, 00:21:29.389 "ddgst": false 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 },{ 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme4", 00:21:29.389 "trtype": "tcp", 00:21:29.389 "traddr": "10.0.0.2", 00:21:29.389 "adrfam": "ipv4", 00:21:29.389 "trsvcid": "4420", 00:21:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.389 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.389 "hdgst": false, 00:21:29.389 "ddgst": false 00:21:29.389 }, 00:21:29.389 "method": "bdev_nvme_attach_controller" 00:21:29.389 },{ 00:21:29.389 "params": { 00:21:29.389 "name": "Nvme5", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 },{ 00:21:29.390 "params": { 00:21:29.390 "name": "Nvme6", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 },{ 00:21:29.390 "params": { 00:21:29.390 "name": "Nvme7", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 
"adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 },{ 00:21:29.390 "params": { 00:21:29.390 "name": "Nvme8", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 },{ 00:21:29.390 "params": { 00:21:29.390 "name": "Nvme9", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 },{ 00:21:29.390 "params": { 00:21:29.390 "name": "Nvme10", 00:21:29.390 "trtype": "tcp", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "adrfam": "ipv4", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.390 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.390 "hdgst": false, 00:21:29.390 "ddgst": false 00:21:29.390 }, 00:21:29.390 "method": "bdev_nvme_attach_controller" 00:21:29.390 }' 00:21:29.390 [2024-12-10 04:08:28.586404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.390 [2024-12-10 04:08:28.627355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 113891 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:31.291 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:32.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 113891 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 113621 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.228 "name": "Nvme$subsystem", 00:21:32.228 "trtype": "$TEST_TRANSPORT", 00:21:32.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.228 "adrfam": "ipv4", 00:21:32.228 "trsvcid": "$NVMF_PORT", 00:21:32.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.228 "hdgst": ${hdgst:-false}, 00:21:32.228 "ddgst": ${ddgst:-false} 00:21:32.228 }, 00:21:32.228 "method": "bdev_nvme_attach_controller" 00:21:32.228 } 00:21:32.228 EOF 00:21:32.228 )") 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.228 [2024-12-10 
04:08:31.441360] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:32.228 [2024-12-10 04:08:31.441408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114370 ] 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.228 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.228 { 00:21:32.228 "params": { 00:21:32.229 "name": "Nvme$subsystem", 00:21:32.229 "trtype": "$TEST_TRANSPORT", 00:21:32.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "$NVMF_PORT", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.229 "hdgst": ${hdgst:-false}, 00:21:32.229 "ddgst": ${ddgst:-false} 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 } 00:21:32.229 EOF 00:21:32.229 )") 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.229 { 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme$subsystem", 00:21:32.229 "trtype": "$TEST_TRANSPORT", 00:21:32.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "$NVMF_PORT", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.229 "hdgst": ${hdgst:-false}, 00:21:32.229 "ddgst": ${ddgst:-false} 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 } 00:21:32.229 EOF 00:21:32.229 )") 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:32.229 { 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme$subsystem", 00:21:32.229 "trtype": "$TEST_TRANSPORT", 00:21:32.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "$NVMF_PORT", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.229 "hdgst": ${hdgst:-false}, 00:21:32.229 "ddgst": ${ddgst:-false} 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 } 00:21:32.229 EOF 00:21:32.229 )") 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
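Once all ten fragments exist, the nvmf/common.sh@584–@586 steps assemble the final document: setting IFS=, makes "${config[*]}" expand comma-joined, printf emits the joined objects, and jq validates and pretty-prints the result. A sketch of the idea, continuing from the loop sketch above; the outer "subsystems"/"bdev" wrapper shown here is an assumption for illustration, since the trace only shows the joined method objects.

# Join the fragments with commas and let jq validate/pretty-print.
# jq exits non-zero if any fragment is malformed JSON, so this also
# acts as a sanity check. Assumes the global "config" array built by
# the fragment loop sketched earlier.
gen_target_json_sketch() {
	jq . <<-JSON
	{
	  "subsystems": [
	    {
	      "subsystem": "bdev",
	      "config": [
	        $(IFS=,; printf '%s\n' "${config[*]}")
	      ]
	    }
	  ]
	}
	JSON
}

The generated JSON is handed to bdevperf as --json /dev/fd/62 via process substitution (visible in the "Killed ... --json <(gen_nvmf_target_json ...)" line above), so the configuration never touches disk.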
00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:32.229 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme1", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme2", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme3", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme4", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme5", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme6", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme7", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme8", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme9", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 },{ 00:21:32.229 "params": { 00:21:32.229 "name": "Nvme10", 00:21:32.229 "trtype": "tcp", 00:21:32.229 "traddr": "10.0.0.2", 00:21:32.229 "adrfam": "ipv4", 00:21:32.229 "trsvcid": "4420", 00:21:32.229 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:32.229 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:32.229 "hdgst": false, 00:21:32.229 "ddgst": false 00:21:32.229 }, 00:21:32.229 "method": "bdev_nvme_attach_controller" 00:21:32.229 }' 00:21:32.489 [2024-12-10 04:08:31.516625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.489 [2024-12-10 04:08:31.556661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.865 Running I/O for 1 seconds... 00:21:34.802 2256.00 IOPS, 141.00 MiB/s 00:21:34.802 Latency(us) 00:21:34.802 [2024-12-10T03:08:34.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.802 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme1n1 : 1.13 285.94 17.87 0.00 0.00 220768.48 8113.98 213709.78 00:21:34.802 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme2n1 : 1.13 282.83 17.68 0.00 0.00 220401.37 15166.90 213709.78 00:21:34.802 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme3n1 : 1.12 289.40 18.09 0.00 0.00 212034.48 6616.02 210713.84 00:21:34.802 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme4n1 : 1.14 280.12 17.51 0.00 0.00 217187.47 16976.94 226692.14 00:21:34.802 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme5n1 : 1.09 234.91 14.68 0.00 0.00 254601.75 18350.08 226692.14 00:21:34.802 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme6n1 : 1.15 278.59 17.41 0.00 0.00 211765.05 15478.98 226692.14 00:21:34.802 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme7n1 : 1.14 281.26 17.58 0.00 0.00 207052.60 23717.79 218702.99 00:21:34.802 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme8n1 : 1.12 295.45 18.47 0.00 0.00 192825.09 3479.65 192738.26 00:21:34.802 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme9n1 : 1.15 278.97 17.44 0.00 0.00 202706.16 17226.61 223696.21 00:21:34.802 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:34.802 Verification LBA range: start 0x0 length 0x400 00:21:34.802 Nvme10n1 : 1.15 278.03 17.38 0.00 0.00 200441.47 17351.44 235679.94 00:21:34.802 [2024-12-10T03:08:34.088Z] =================================================================================================================== 00:21:34.802 [2024-12-10T03:08:34.088Z] Total : 2785.50 174.09 0.00 0.00 213080.31 3479.65 235679.94 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.061 rmmod nvme_tcp 00:21:35.061 rmmod nvme_fabrics 00:21:35.061 rmmod nvme_keyring 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 113621 ']' 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 113621 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 113621 ']' 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 113621 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113621 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113621' 00:21:35.061 killing process with pid 113621 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 113621 00:21:35.061 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 113621 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.629 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.532 00:21:37.532 real 0m15.767s 00:21:37.532 user 0m35.927s 00:21:37.532 sys 0m5.814s 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:37.532 ************************************ 00:21:37.532 END TEST nvmf_shutdown_tc1 00:21:37.532 ************************************ 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.532 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.792 ************************************ 00:21:37.792 START TEST nvmf_shutdown_tc2 00:21:37.792 ************************************ 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:37.792 04:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.792 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:37.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:37.793 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:37.793 Found net devices under 0000:af:00.0: cvl_0_0 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.793 04:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:37.793 Found net devices under 0000:af:00.1: cvl_0_1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:37.793 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:37.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:37.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms
00:21:37.793
00:21:37.793 --- 10.0.0.2 ping statistics ---
00:21:37.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:37.793 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:37.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:37.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:21:37.793
00:21:37.793 --- 10.0.0.1 ping statistics ---
00:21:37.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:37.793 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:37.793 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:38.053 04:08:37
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=115377 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 115377 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 115377 ']' 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.053 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.053 [2024-12-10 04:08:37.163418] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:38.053 [2024-12-10 04:08:37.163460] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.053 [2024-12-10 04:08:37.238698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.053 [2024-12-10 04:08:37.279800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.053 [2024-12-10 04:08:37.279836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.053 [2024-12-10 04:08:37.279843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.053 [2024-12-10 04:08:37.279849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.053 [2024-12-10 04:08:37.279854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
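Here nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (binary 11110, i.e. cores 1 through 4, which is why four reactor notices follow), and waitforlisten polls /var/tmp/spdk.sock with max_retries=100 until the RPC server answers. A rough sketch of that wait loop follows; it is an assumption-laden simplification of the real helper in common/autotest_common.sh, which among other things exercises the RPC socket rather than merely stat'ing it.

# Rough, simplified waitforlisten-style helper (assumption: the real
# helper is more thorough than a socket-file existence check).
waitforlisten_sketch() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	while ((max_retries-- > 0)); do
		# If the target died during startup there is nothing to wait for.
		kill -0 "$pid" 2>/dev/null || return 1
		# Treat an existing socket file as "listening".
		[[ -S $rpc_addr ]] && return 0
		sleep 0.1
	done
	return 1
}

# Usage (hypothetical pid variable):
# waitforlisten_sketch "$nvmfpid" /var/tmp/spdk.sock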
00:21:38.053 [2024-12-10 04:08:37.281207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.053 [2024-12-10 04:08:37.281317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.053 [2024-12-10 04:08:37.281422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.053 [2024-12-10 04:08:37.281423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.312 [2024-12-10 04:08:37.430699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.312 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.312 Malloc1 00:21:38.312 [2024-12-10 04:08:37.545394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.312 Malloc2 00:21:38.571 Malloc3 00:21:38.571 Malloc4 00:21:38.571 Malloc5 00:21:38.571 Malloc6 00:21:38.571 Malloc7 00:21:38.571 Malloc8 00:21:38.830 Malloc9 00:21:38.830 Malloc10 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=115592 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 115592 /var/tmp/bdevperf.sock 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 115592 ']' 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.830 04:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.830 { 00:21:38.830 "params": { 00:21:38.830 "name": "Nvme$subsystem", 00:21:38.830 "trtype": "$TEST_TRANSPORT", 00:21:38.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.830 "adrfam": "ipv4", 00:21:38.830 "trsvcid": "$NVMF_PORT", 00:21:38.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.830 "hdgst": ${hdgst:-false}, 00:21:38.830 "ddgst": ${ddgst:-false} 00:21:38.830 }, 00:21:38.830 "method": "bdev_nvme_attach_controller" 00:21:38.830 } 00:21:38.830 EOF 00:21:38.830 )") 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.830 { 00:21:38.830 "params": { 00:21:38.830 "name": "Nvme$subsystem", 00:21:38.830 "trtype": "$TEST_TRANSPORT", 00:21:38.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.830 "adrfam": "ipv4", 00:21:38.830 "trsvcid": "$NVMF_PORT", 00:21:38.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.830 "hdgst": ${hdgst:-false}, 00:21:38.830 "ddgst": ${ddgst:-false} 00:21:38.830 }, 00:21:38.830 "method": "bdev_nvme_attach_controller" 00:21:38.830 } 00:21:38.830 EOF 00:21:38.830 )") 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.830 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.830 { 00:21:38.831 "params": { 00:21:38.831 
"name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 [2024-12-10 04:08:38.019850] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:38.831 [2024-12-10 04:08:38.019900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115592 ] 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.831 { 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme$subsystem", 00:21:38.831 "trtype": "$TEST_TRANSPORT", 00:21:38.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.831 "adrfam": 
"ipv4", 00:21:38.831 "trsvcid": "$NVMF_PORT", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.831 "hdgst": ${hdgst:-false}, 00:21:38.831 "ddgst": ${ddgst:-false} 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 } 00:21:38.831 EOF 00:21:38.831 )") 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:38.831 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme1", 00:21:38.831 "trtype": "tcp", 00:21:38.831 "traddr": "10.0.0.2", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "4420", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.831 "hdgst": false, 00:21:38.831 "ddgst": false 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 },{ 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme2", 00:21:38.831 "trtype": "tcp", 00:21:38.831 "traddr": "10.0.0.2", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "4420", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:38.831 "hdgst": false, 00:21:38.831 "ddgst": false 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 },{ 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme3", 00:21:38.831 "trtype": "tcp", 00:21:38.831 "traddr": "10.0.0.2", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "4420", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:38.831 "hdgst": false, 00:21:38.831 "ddgst": false 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 },{ 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme4", 00:21:38.831 "trtype": "tcp", 00:21:38.831 "traddr": "10.0.0.2", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "4420", 00:21:38.831 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:38.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:38.831 "hdgst": false, 00:21:38.831 "ddgst": false 00:21:38.831 }, 00:21:38.831 "method": "bdev_nvme_attach_controller" 00:21:38.831 },{ 00:21:38.831 "params": { 00:21:38.831 "name": "Nvme5", 00:21:38.831 "trtype": "tcp", 00:21:38.831 "traddr": "10.0.0.2", 00:21:38.831 "adrfam": "ipv4", 00:21:38.831 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 },{ 00:21:38.832 "params": { 00:21:38.832 "name": "Nvme6", 00:21:38.832 "trtype": "tcp", 00:21:38.832 "traddr": "10.0.0.2", 00:21:38.832 "adrfam": "ipv4", 00:21:38.832 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 },{ 00:21:38.832 "params": { 00:21:38.832 "name": "Nvme7", 00:21:38.832 "trtype": "tcp", 00:21:38.832 "traddr": "10.0.0.2", 00:21:38.832 
"adrfam": "ipv4", 00:21:38.832 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 },{ 00:21:38.832 "params": { 00:21:38.832 "name": "Nvme8", 00:21:38.832 "trtype": "tcp", 00:21:38.832 "traddr": "10.0.0.2", 00:21:38.832 "adrfam": "ipv4", 00:21:38.832 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 },{ 00:21:38.832 "params": { 00:21:38.832 "name": "Nvme9", 00:21:38.832 "trtype": "tcp", 00:21:38.832 "traddr": "10.0.0.2", 00:21:38.832 "adrfam": "ipv4", 00:21:38.832 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 },{ 00:21:38.832 "params": { 00:21:38.832 "name": "Nvme10", 00:21:38.832 "trtype": "tcp", 00:21:38.832 "traddr": "10.0.0.2", 00:21:38.832 "adrfam": "ipv4", 00:21:38.832 "trsvcid": "4420", 00:21:38.832 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:38.832 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:38.832 "hdgst": false, 00:21:38.832 "ddgst": false 00:21:38.832 }, 00:21:38.832 "method": "bdev_nvme_attach_controller" 00:21:38.832 }' 00:21:38.832 [2024-12-10 04:08:38.097831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.091 [2024-12-10 04:08:38.138120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.467 Running I/O for 10 seconds... 
00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:40.726 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.984 04:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.984 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=129 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 129 -ge 100 ']' 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 115592 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 115592 ']' 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 115592 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115592 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115592' 00:21:41.244 killing process with pid 115592 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 115592 00:21:41.244 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 115592 00:21:41.244 Received shutdown signal, test time was about 0.768396 seconds 00:21:41.244 00:21:41.244 Latency(us) 00:21:41.244 [2024-12-10T03:08:40.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.244 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme1n1 : 0.75 255.35 15.96 0.00 0.00 247482.27 17975.59 202724.69 00:21:41.244 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme2n1 : 0.74 286.48 17.90 0.00 0.00 212180.16 11671.65 211712.49 00:21:41.244 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme3n1 : 0.76 336.61 21.04 0.00 0.00 179857.19 15229.32 218702.99 00:21:41.244 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme4n1 : 0.77 333.44 20.84 0.00 0.00 176961.95 14480.34 201726.05 
00:21:41.244 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme5n1 : 0.73 261.37 16.34 0.00 0.00 220979.53 20721.86 214708.42 00:21:41.244 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme6n1 : 0.75 256.10 16.01 0.00 0.00 220919.71 16976.94 220700.28 00:21:41.244 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme7n1 : 0.72 265.59 16.60 0.00 0.00 206885.71 25340.59 212711.13 00:21:41.244 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme8n1 : 0.74 259.49 16.22 0.00 0.00 207243.46 14105.84 216705.71 00:21:41.244 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme9n1 : 0.76 254.05 15.88 0.00 0.00 207492.31 34453.21 228689.43 00:21:41.244 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.244 Verification LBA range: start 0x0 length 0x400 00:21:41.244 Nvme10n1 : 0.76 253.36 15.83 0.00 0.00 202935.34 18350.08 244667.73 00:21:41.244 [2024-12-10T03:08:40.530Z] =================================================================================================================== 00:21:41.244 [2024-12-10T03:08:40.530Z] Total : 2761.84 172.62 0.00 0.00 206484.40 11671.65 244667.73 00:21:41.503 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 115377 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.437 rmmod nvme_tcp 00:21:42.437 rmmod nvme_fabrics 00:21:42.437 rmmod nvme_keyring 00:21:42.437 04:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 115377 ']' 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 115377 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 115377 ']' 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 115377 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.437 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115377 00:21:42.696 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.696 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.696 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115377' 00:21:42.696 killing process with pid 115377 00:21:42.696 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 115377 00:21:42.696 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 115377 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.954 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.954 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.490 00:21:45.490 real 0m7.352s 00:21:45.490 user 0m21.732s 00:21:45.490 sys 0m1.303s 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.490 ************************************ 00:21:45.490 END TEST nvmf_shutdown_tc2 00:21:45.490 ************************************ 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.490 ************************************ 00:21:45.490 START TEST nvmf_shutdown_tc3 00:21:45.490 ************************************ 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.490 04:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.490 04:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:45.490 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:45.490 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.490 04:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.490 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:45.491 Found net devices under 0000:af:00.0: cvl_0_0 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:45.491 Found net devices under 0000:af:00.1: cvl_0_1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.491 04:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:21:45.491 00:21:45.491 --- 10.0.0.2 ping statistics --- 00:21:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.491 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:21:45.491 00:21:45.491 --- 10.0.0.1 ping statistics --- 00:21:45.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.491 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=116670 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 116670 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 116670 ']' 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
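The target launch above carries three 'ip netns exec cvl_0_0_ns_spdk' prefixes where tc2's carried two: common.sh@293, visible in both init traces, prepends the namespace wrapper to NVMF_APP on every nvmftestinit, and each shutdown test case re-runs it. The stacking is effectively harmless, since re-entering the namespace the process is already in changes nothing. A sketch of the accumulation (binary path illustrative):

# Each test case's nvmftestinit executes common.sh@293 once more:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(./build/bin/nvmf_tgt)                            # illustrative path
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")     # first case: one prefix
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")     # tc2: two prefixes
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")     # tc3: three, as logged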
00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.491 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.491 [2024-12-10 04:08:44.608279] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:45.491 [2024-12-10 04:08:44.608321] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.491 [2024-12-10 04:08:44.688648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.491 [2024-12-10 04:08:44.729993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.491 [2024-12-10 04:08:44.730031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.491 [2024-12-10 04:08:44.730041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.491 [2024-12-10 04:08:44.730049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.491 [2024-12-10 04:08:44.730054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.491 [2024-12-10 04:08:44.731542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.491 [2024-12-10 04:08:44.731578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.491 [2024-12-10 04:08:44.731610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.491 [2024-12-10 04:08:44.731611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.433 [2024-12-10 04:08:45.482994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:46.433 04:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:21:46.433 [the shutdown.sh@28/@29 for/cat pair above repeats identically ten times, once per subsystem]
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.433 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:46.433 Malloc1
00:21:46.433 [2024-12-10 04:08:45.606449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.433 Malloc2 00:21:46.433 Malloc3 00:21:46.433 Malloc4 00:21:46.692 Malloc5 00:21:46.692 Malloc6 00:21:46.692 Malloc7 00:21:46.692 Malloc8 00:21:46.692 Malloc9 00:21:46.951 Malloc10 00:21:46.951 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.951 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:46.951 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.951 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=116944 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 116944 /var/tmp/bdevperf.sock 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 116944 ']' 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
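The create_subsystems phase above only logs its side effects (the Malloc1 through Malloc10 bdevs and the listener notice); the per-subsystem RPC batch that shutdown.sh cats into rpcs.txt is not echoed. For orientation only, an equivalent hand-rolled sequence for one subsystem via SPDK's rpc.py would look roughly like the following; the malloc size and block-size values are illustrative, not taken from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
i=1   # the script loops this for i in 1..10

# backing bdev -- these show up as the Malloc1..Malloc10 lines in the log
$rpc bdev_malloc_create -b Malloc$i 64 512

# subsystem + namespace + TCP listener on the namespaced target address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420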
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:21:46.951 {
00:21:46.951 "params": {
00:21:46.951 "name": "Nvme$subsystem",
00:21:46.951 "trtype": "$TEST_TRANSPORT",
00:21:46.951 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:46.951 "adrfam": "ipv4",
00:21:46.951 "trsvcid": "$NVMF_PORT",
00:21:46.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:46.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:46.951 "hdgst": ${hdgst:-false},
00:21:46.951 "ddgst": ${ddgst:-false}
00:21:46.951 },
00:21:46.951 "method": "bdev_nvme_attach_controller"
00:21:46.951 }
00:21:46.951 EOF
00:21:46.951 )")
00:21:46.951 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:21:46.952 [the nvmf/common.sh@562 for / @582 config+= here-doc / @582 cat sequence above repeats verbatim for each of the 10 subsystems; the bdevperf application starts up partway through that trace:]
00:21:46.952 [2024-12-10 04:08:46.084556] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:21:46.952 [2024-12-10 04:08:46.084604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116944 ]
00:21:46.952 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
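gen_nvmf_target_json, traced above, builds the bdevperf config purely in bash: each iteration appends one bdev_nvme_attach_controller stanza to the config array via a here-doc, and the stanzas are later glued together by expanding "${config[*]}" under IFS=, (that expansion joins array elements with the first character of IFS, which is where the "},{" seams in the printf output below come from). A minimal standalone sketch of the join-and-validate trick; the outer wrapper that bdevperf's --json loader ultimately receives is not shown in the trace, so this sketch simply validates the joined list as a JSON array:

config=()
for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# "${config[*]}" joins elements with the first character of IFS; wrapping the
# result in [] turns the comma-joined stanzas into a JSON array jq can verify.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .

Invoked as in the trace (arguments 1 through 10), this yields one stanza per subsystem; the log line above shows the script feeding the finished document to bdevperf through --json /dev/fd/63.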
00:21:46.952 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:21:46.952 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:21:46.952 "params": {
00:21:46.952 "name": "Nvme1",
00:21:46.952 "trtype": "tcp",
00:21:46.952 "traddr": "10.0.0.2",
00:21:46.952 "adrfam": "ipv4",
00:21:46.952 "trsvcid": "4420",
00:21:46.952 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:46.952 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:46.952 "hdgst": false,
00:21:46.952 "ddgst": false
00:21:46.952 },
00:21:46.952 "method": "bdev_nvme_attach_controller"
00:21:46.952 },{
00:21:46.953 [identical stanzas follow for Nvme2 through Nvme10, differing only in the cnode/host index]
00:21:46.953 "method": "bdev_nvme_attach_controller"
00:21:46.953 }'
00:21:46.953 [2024-12-10 04:08:46.159137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:46.953 [2024-12-10 04:08:46.198835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:48.856 Running I/O for 10 seconds...
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:48.856 04:08:47
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.856 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.856 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:48.856 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:48.856 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=80 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:21:49.115 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:49.374 04:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 116670
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 116670 ']'
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 116670
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.374 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116670
00:21:49.648 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:49.648 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:49.648 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116670'
00:21:49.648 killing process with pid 116670
00:21:49.648 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 116670
00:21:49.648 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 116670
00:21:49.648 [2024-12-10 04:08:48.670089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001840 is same with the state(6) to be set
00:21:49.650 [the nvmf_tcp_qpair_set_recv_state *ERROR* entry above floods the log back-to-back from 04:08:48.670089 through 04:08:48.676264, first for tqpair=0x1001840, then 0x1001d10, then 0x10026d0, then 0x1002ba0, as the dying target tears down its queue pairs; the capture ends mid-flood]
with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.650 [2024-12-10 04:08:48.676357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676409] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the 
state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.676557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002ba0 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 
04:08:48.677685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.651 [2024-12-10 04:08:48.677766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same 
with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.677822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003070 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678989] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.678995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the 
state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.652 [2024-12-10 04:08:48.679160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.679267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003540 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 
04:08:48.680206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same with the state(6) to be set 00:21:49.653 [2024-12-10 04:08:48.680343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003a30 is same 
00:21:49.654 [2024-12-10 04:08:48.680938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.654 [2024-12-10 04:08:48.680967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.654 [2024-12-10 04:08:48.680977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.654 [2024-12-10 04:08:48.680984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.654 [2024-12-10 04:08:48.680992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.654 [2024-12-10 04:08:48.681002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.654 [2024-12-10 04:08:48.681011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.654 [2024-12-10 04:08:48.681017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.654 [2024-12-10 04:08:48.681024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b994d0 is same with the state(6) to be set
00:21:49.654 [2024-12-10 04:08:48.681053 .. 04:08:48.681794] same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) sequence (qid:0 cid:0-3) followed by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state recv state error, repeated for tqpair=0x1aba610, 0x2012f10, 0x1ffb950, 0x2012020, 0x1fc6290, 0x1b9a950, 0x1b992d0, 0x1ba5000, 0x1ba5490 [message repeated; duplicates condensed]
00:21:49.655 [2024-12-10 04:08:48.682139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.655 [2024-12-10 04:08:48.682161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.655 [2024-12-10 04:08:48.682182 .. 04:08:48.682268] WRITE sqid:1 cid:18-23 nsid:1 lba:35072-35712 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 [message repeated; duplicates condensed]
00:21:49.655 [2024-12-10 04:08:48.682277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.655 [2024-12-10 04:08:48.682571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.655 [2024-12-10 04:08:48.682579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.682988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.682995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:49.656 [2024-12-10 04:08:48.683056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.656 [2024-12-10 04:08:48.683109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.656 [2024-12-10 04:08:48.683115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.657 [2024-12-10 04:08:48.683696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.657 [2024-12-10 04:08:48.683760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 
[2024-12-10 04:08:48.683920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.683987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.683996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 
04:08:48.684075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.657 [2024-12-10 04:08:48.684144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.657 [2024-12-10 04:08:48.684152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.684159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.684174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 
04:08:48.694837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.694988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.694997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 
04:08:48.695048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 
04:08:48.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 
04:08:48.695489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:49.658 [2024-12-10 04:08:48.695645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.658 [2024-12-10 04:08:48.695659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.658 [2024-12-10 04:08:48.695674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 
[2024-12-10 04:08:48.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.695981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.695994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.696006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.696017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 04:08:48.696027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.659 [2024-12-10 04:08:48.696038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.659 [2024-12-10 
04:08:48.696048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.659 [2024-12-10 04:08:48.696059-696999] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 cid:19-63 nsid:1 lba:27008-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [45 identical command/completion pairs collapsed]
00:21:49.660 [2024-12-10 04:08:48.697009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dea160 is same with the state(6) to be set
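The "(00/08)" pair in each aborted completion above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion — the expected outcome when I/O qpairs are torn down mid-flight during a controller reset. A minimal decoding sketch for that pair (a standalone illustration covering only the codes seen in this log, not SPDK's own spdk_nvme_print_completion):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode an NVMe completion status printed as "(SCT/SC)", e.g. "(00/08)". */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x08) {
        /* Generic command status, code 08h: the submission queue the
         * command was posted to was deleted before the command completed. */
        return "ABORTED - SQ DELETION";
    }
    if (sct == 0x0 && sc == 0x00) {
        return "SUCCESS";
    }
    return "UNKNOWN";
}

int main(void)
{
    /* The pair printed throughout this log: */
    printf("(00/08) => %s\n", nvme_status_str(0x0, 0x08));
    return 0;
}
```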
00:21:49.660 [2024-12-10 04:08:48.697128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b994d0 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aba610 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012f10 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffb950 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012020 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6290 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9a950 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b992d0 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5000 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.697312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5490 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.701444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:49.660 [2024-12-10 04:08:48.702014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:49.660 [2024-12-10 04:08:48.702184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.660 [2024-12-10 04:08:48.702214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba5000 with addr=10.0.0.2, port=4420
00:21:49.660 [2024-12-10 04:08:48.702230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5000 is same with the state(6) to be set
00:21:49.660 [2024-12-10 04:08:48.703693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:49.660 [2024-12-10 04:08:48.703882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.660 [2024-12-10 04:08:48.703907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2012f10 with addr=10.0.0.2, port=4420
00:21:49.660 [2024-12-10 04:08:48.703921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012f10 is same with the state(6) to be set
00:21:49.660 [2024-12-10 04:08:48.703939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5000 (9): Bad file descriptor
00:21:49.660 [2024-12-10 04:08:48.704007-704343] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [6 occurrences collapsed]
00:21:49.661 [2024-12-10 04:08:48.704559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.661 [2024-12-10 04:08:48.704581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffb950 with addr=10.0.0.2, port=4420
00:21:49.661 [2024-12-10 04:08:48.704594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffb950 is same with the state(6) to be set
00:21:49.661 [2024-12-10 04:08:48.704611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012f10 (9): Bad file descriptor
00:21:49.661 [2024-12-10 04:08:48.704627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:49.661 [2024-12-10 04:08:48.704638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:49.661 [2024-12-10 04:08:48.704652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
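errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 while the target side is being torn down, so every reconnect attempt dies at connect() before any NVMe/TCP PDU is exchanged. A self-contained sketch of that failure mode using plain POSIX sockets (an illustration of the errno, not SPDK's posix_sock_create; with no listener on the port, connect() typically fails exactly this way):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Attempt a TCP connect the way an NVMe/TCP initiator would. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux, ECONNREFUSED is 111 -- the value printed in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```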
00:21:49.661 [2024-12-10 04:08:48.704666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:49.661 [2024-12-10 04:08:48.704773-706498] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs collapsed]
00:21:49.662 [2024-12-10 04:08:48.706511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faae40 is same with the state(6) to be set
00:21:49.662 [2024-12-10 04:08:48.706660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffb950 (9): Bad file descriptor
00:21:49.662 [2024-12-10 04:08:48.706679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:49.662 [2024-12-10 04:08:48.706691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:49.662 [2024-12-10 04:08:48.706703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:49.662 [2024-12-10 04:08:48.706714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:49.662 [2024-12-10 04:08:48.708269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:49.662 [2024-12-10 04:08:48.708310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:49.662 [2024-12-10 04:08:48.708324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:49.662 [2024-12-10 04:08:48.708336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:49.662 [2024-12-10 04:08:48.708348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
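The recurring four-line sequence per subsystem (Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed) is a reset attempt that could not re-establish its connection to the target, after which the controller is parked in a failed state instead of being retried indefinitely. A hedged sketch of that general shape of bounded reconnect logic (illustrative only; the states, try_reconnect() stub, and attempt bound are assumptions, not the bdev_nvme implementation):

```c
#include <stdbool.h>
#include <stdio.h>

enum ctrlr_state { CTRLR_RUNNING, CTRLR_RESETTING, CTRLR_FAILED };

/* Stub standing in for the transport-level reconnect; hypothetical. */
static bool try_reconnect(const char *nqn)
{
    (void)nqn;
    return false; /* target still refusing connections, as in the log */
}

/* Try to bring a controller back a bounded number of times, then park it
 * in a failed state, mirroring the disconnect -> reconnect-poll -> fail
 * progression printed above. */
static enum ctrlr_state reset_ctrlr(const char *nqn, int max_attempts)
{
    printf("[%s] resetting controller\n", nqn);
    for (int i = 0; i < max_attempts; i++) {
        if (try_reconnect(nqn)) {
            return CTRLR_RUNNING;
        }
        printf("[%s] controller reinitialization failed\n", nqn);
    }
    printf("[%s] in failed state.\n", nqn);
    return CTRLR_FAILED;
}

int main(void)
{
    if (reset_ctrlr("nqn.2016-06.io.spdk:cnode9", 1) == CTRLR_FAILED) {
        printf("[nqn.2016-06.io.spdk:cnode9] Resetting controller failed.\n");
    }
    return 0;
}
```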
00:21:49.662 [2024-12-10 04:08:48.708634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.663 [2024-12-10 04:08:48.708672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b992d0 with addr=10.0.0.2, port=4420
00:21:49.663 [2024-12-10 04:08:48.708686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b992d0 is same with the state(6) to be set
00:21:49.663 [2024-12-10 04:08:48.708746-710231] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:4-58 nsid:1 lba:25088-32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [55 identical command/completion pairs collapsed]
00:21:49.664 [2024-12-10 04:08:48.710246-710340] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) [4 command/completion pairs collapsed]
00:21:49.664 [2024-12-10 04:08:48.710355-710475] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:59-63 nsid:1 lba:32128-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) [5 command/completion pairs collapsed]
00:21:49.664 [2024-12-10 04:08:48.710489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da93d0 is same with the state(6) to be set
00:21:49.664 [2024-12-10 04:08:48.711828-711903] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-3 nsid:1 lba:24576-24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) [4 command/completion pairs collapsed]
00:21:49.664 [2024-12-10 04:08:48.711916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.664 [2024-12-10 04:08:48.711925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.711943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.711953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.711961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.711971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.711979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.711989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.711997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.664 [2024-12-10 04:08:48.712105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.664 [2024-12-10 04:08:48.712115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.665 [2024-12-10 04:08:48.712853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.665 [2024-12-10 04:08:48.712861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.712983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.712992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.713000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.713009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daa5c0 is same with the state(6) to be set 00:21:49.666 [2024-12-10 04:08:48.714119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.666 [2024-12-10 04:08:48.714618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.666 [2024-12-10 04:08:48.714633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.667 [2024-12-10 04:08:48.714882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.714989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.714998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 
04:08:48.715058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.715303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa9ac0 is same with the state(6) to be set 00:21:49.667 [2024-12-10 04:08:48.716649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.716666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.716680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.716688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.716699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.667 [2024-12-10 04:08:48.716708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.667 [2024-12-10 04:08:48.716719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.716984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.716994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.668 [2024-12-10 04:08:48.717462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.668 [2024-12-10 04:08:48.717469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.717832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.717840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fac1c0 is same with the state(6) to be set 00:21:49.669 [2024-12-10 04:08:48.718954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.718971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.718984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.718992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719183] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.669 [2024-12-10 04:08:48.719308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.669 [2024-12-10 04:08:48.719319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.670 [2024-12-10 04:08:48.719917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.719985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.719992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.720003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.720011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.720020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.720027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.670 [2024-12-10 04:08:48.720037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.670 [2024-12-10 04:08:48.720044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.720054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.720061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.720072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 
04:08:48.720091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.720099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.720109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.720117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.720126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ca6d70 is same with the state(6) to be set 00:21:49.671 [2024-12-10 04:08:48.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.671 [2024-12-10 04:08:48.721779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.671 [2024-12-10 04:08:48.721787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.721987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.721997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.672 [2024-12-10 04:08:48.722314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.672 [2024-12-10 04:08:48.722321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ef46d0 is same with the state(6) to be set 00:21:49.672 [2024-12-10 04:08:48.723282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:49.672 [2024-12-10 04:08:48.723302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:49.672 [2024-12-10 04:08:48.723314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:49.672 [2024-12-10 04:08:48.723326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:49.672 [2024-12-10 04:08:48.723365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b992d0 (9): Bad file descriptor 00:21:49.672 [2024-12-10 04:08:48.723414] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:21:49.672 [2024-12-10 04:08:48.723428] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:21:49.672 [2024-12-10 04:08:48.723443] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:49.672 [2024-12-10 04:08:48.723513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:49.672 task offset: 34944 on job bdev=Nvme3n1 fails
00:21:49.672
00:21:49.672 Latency(us)
00:21:49.672 [2024-12-10T03:08:48.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.672 Job: Nvme1n1 ended in about 0.94 seconds with error
00:21:49.672 Verification LBA range: start 0x0 length 0x400
00:21:49.672 Nvme1n1 : 0.94 207.76 12.98 67.84 0.00 229785.54 15915.89 236678.58
00:21:49.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.672 Job: Nvme2n1 ended in about 0.95 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme2n1 : 0.95 203.00 12.69 67.67 0.00 230041.84 17601.10 214708.42
00:21:49.673 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme3n1 ended in about 0.93 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme3n1 : 0.93 275.18 17.20 68.79 0.00 177781.03 14605.17 213709.78
00:21:49.673 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme4n1 ended in about 0.95 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme4n1 : 0.95 206.73 12.92 67.51 0.00 219450.06 18474.91 216705.71
00:21:49.673 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme5n1 ended in about 0.94 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme5n1 : 0.94 204.33 12.77 68.11 0.00 216912.21 17351.44 222697.57
00:21:49.673 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme6n1 ended in about 0.95 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme6n1 : 0.95 207.24 12.95 67.33 0.00 211668.49 19473.55 231685.36
00:21:49.673 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme7n1 ended in about 0.95 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme7n1 : 0.95 201.49 12.59 67.16 0.00 212480.24 19972.88 223696.21
00:21:49.673 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme8n1 ended in about 0.96 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme8n1 : 0.96 201.04 12.56 67.01 0.00 209178.33 14230.67 210713.84
00:21:49.673 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme9n1 ended in about 0.93 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme9n1 : 0.93 206.06 12.88 68.69 0.00 199447.89 18100.42 241671.80
00:21:49.673 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.673 Job: Nvme10n1 ended in about 0.93 seconds with error
00:21:49.673 Verification LBA range: start 0x0 length 0x400
00:21:49.673 Nvme10n1 : 0.93 205.81 12.86 68.60 0.00 195953.86 16976.94 259647.39
00:21:49.673 [2024-12-10T03:08:48.959Z] ===================================================================================================================
00:21:49.673 [2024-12-10T03:08:48.959Z] Total : 2118.63 132.41 678.71 0.00 209527.62 14230.67 259647.39
00:21:49.673 [2024-12-10 04:08:48.755983] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:49.673 [2024-12-10 04:08:48.756030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:49.673 [2024-12-10 04:08:48.756352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.756373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba5490 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.756384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5490 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.756602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.756615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b994d0 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.756623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b994d0 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.756711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.756724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9a950 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.756732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9a950 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.756950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.756962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc6290 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.756979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6290 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.756986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.756994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:49.673 [2024-12-10 04:08:48.757003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:49.673 [2024-12-10 04:08:48.757014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
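A quick consistency check on the summary table above: the Total row is the column-wise sum of the ten job rows (IOPS 207.76 + 203.00 + 275.18 + 206.73 + 204.33 + 207.24 + 201.49 + 201.04 + 206.06 + 205.81 = 2118.64, matching 2118.63 to rounding, and Fail/s sums to exactly 678.71), while the min and max columns are taken across rows (14230.67 from Nvme8n1, 259647.39 from Nvme10n1). The errno = 111 in the connect() failures that follow is ECONNREFUSED: the target was stopped mid-run, so every reconnect attempt to 10.0.0.2:4420 is refused.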
00:21:49.673 [2024-12-10 04:08:48.758390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:49.673 [2024-12-10 04:08:48.758413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:49.673 [2024-12-10 04:08:48.758423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:49.673 [2024-12-10 04:08:48.758696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.758713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aba610 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.758722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aba610 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.758939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.758951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2012020 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.758959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012020 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.758972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5490 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.758984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b994d0 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.758993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9a950 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6290 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759042] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:49.673 [2024-12-10 04:08:48.759054] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:49.673 [2024-12-10 04:08:48.759064] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:49.673 [2024-12-10 04:08:48.759075] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:21:49.673 [2024-12-10 04:08:48.759360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.759376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba5000 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.759385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5000 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.759469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.759481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2012f10 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.759489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012f10 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.759614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.673 [2024-12-10 04:08:48.759629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffb950 with addr=10.0.0.2, port=4420 00:21:49.673 [2024-12-10 04:08:48.759637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffb950 is same with the state(6) to be set 00:21:49.673 [2024-12-10 04:08:48.759647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aba610 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012020 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.759673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:49.673 [2024-12-10 04:08:48.759680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:49.673 [2024-12-10 04:08:48.759688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:49.673 [2024-12-10 04:08:48.759696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.759703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:49.673 [2024-12-10 04:08:48.759710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:49.673 [2024-12-10 04:08:48.759716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:49.673 [2024-12-10 04:08:48.759723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.759730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:49.673 [2024-12-10 04:08:48.759736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:49.673 [2024-12-10 04:08:48.759743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:49.673 [2024-12-10 04:08:48.759751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.759757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:49.673 [2024-12-10 04:08:48.759763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:49.673 [2024-12-10 04:08:48.759770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:49.673 [2024-12-10 04:08:48.759844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:49.673 [2024-12-10 04:08:48.759866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba5000 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012f10 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffb950 (9): Bad file descriptor 00:21:49.673 [2024-12-10 04:08:48.759893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:49.673 [2024-12-10 04:08:48.759899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.759907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:49.674 [2024-12-10 04:08:48.759914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:49.674 [2024-12-10 04:08:48.759923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:49.674 [2024-12-10 04:08:48.759929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.759936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:49.674 [2024-12-10 04:08:48.759942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:49.674 [2024-12-10 04:08:48.760184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.674 [2024-12-10 04:08:48.760198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b992d0 with addr=10.0.0.2, port=4420 00:21:49.674 [2024-12-10 04:08:48.760206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b992d0 is same with the state(6) to be set 00:21:49.674 [2024-12-10 04:08:48.760214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:49.674 [2024-12-10 04:08:48.760221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.760228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:21:49.674 [2024-12-10 04:08:48.760235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:49.674 [2024-12-10 04:08:48.760243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:49.674 [2024-12-10 04:08:48.760249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.760256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:49.674 [2024-12-10 04:08:48.760263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:49.674 [2024-12-10 04:08:48.760271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:49.674 [2024-12-10 04:08:48.760277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.760284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:49.674 [2024-12-10 04:08:48.760290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:49.674 [2024-12-10 04:08:48.760315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b992d0 (9): Bad file descriptor 00:21:49.674 [2024-12-10 04:08:48.760340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:49.674 [2024-12-10 04:08:48.760348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:49.674 [2024-12-10 04:08:48.760355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:49.674 [2024-12-10 04:08:48.760361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
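The tc3 teardown that follows hinges on the harness's NOT helper: "NOT wait 116944" passes only because waiting on the already-dead bdevperf process exits non-zero, and the xtrace shows the exit status being normalized (es=255 folded to 127 and then to 1) before the final check. A minimal sketch of this inversion pattern, inferred from the trace rather than copied from autotest_common.sh:

NOT() {
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its exit status
    (( es > 128 )) && es=127   # fold signal-range exit codes down to a plain error
    (( es != 0 ))              # succeed only if the wrapped command failed
}

NOT wait 116944   # passes here: wait on a dead/foreign PID returns non-zero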
00:21:49.933 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:50.869 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 116944 00:21:50.869 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 116944 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 116944 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.870 rmmod nvme_tcp 00:21:50.870 
rmmod nvme_fabrics 00:21:50.870 rmmod nvme_keyring 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 116670 ']' 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 116670 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 116670 ']' 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 116670 00:21:50.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (116670) - No such process 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 116670 is not found' 00:21:50.870 Process with pid 116670 is not found 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.870 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.129 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.129 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.129 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.129 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.129 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.033 00:21:53.033 real 0m7.976s 00:21:53.033 user 0m20.108s 00:21:53.033 sys 0m1.391s 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.033 ************************************ 00:21:53.033 END TEST nvmf_shutdown_tc3 00:21:53.033 ************************************ 00:21:53.033 04:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:53.033 ************************************ 00:21:53.033 START TEST nvmf_shutdown_tc4 00:21:53.033 ************************************ 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.033 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:53.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:53.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.292 04:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:53.292 Found net devices under 0000:af:00.0: cvl_0_0 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:53.292 Found net devices under 0000:af:00.1: cvl_0_1 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.292 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.293 04:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:21:53.293 00:21:53.293 --- 10.0.0.2 ping statistics --- 00:21:53.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.293 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:21:53.293 00:21:53.293 --- 10.0.0.1 ping statistics --- 00:21:53.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.293 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.293 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=118179 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 118179 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 118179 ']' 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
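The nvmftestinit sequence traced above builds the whole NVMe/TCP test bed on one host: one port of the e810 pair discovered earlier (cvl_0_0) is moved into a private network namespace for the target, while its peer port (cvl_0_1) stays in the root namespace for the initiator, giving a point-to-point 10.0.0.0/24 link that both pings then verify. Condensed from the trace (root required; the interface names are specific to this test bed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator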
00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.552 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:53.552 [2024-12-10 04:08:52.674295] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:21:53.552 [2024-12-10 04:08:52.674339] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.552 [2024-12-10 04:08:52.753428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.552 [2024-12-10 04:08:52.793575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.552 [2024-12-10 04:08:52.793617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.552 [2024-12-10 04:08:52.793624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.552 [2024-12-10 04:08:52.793630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.552 [2024-12-10 04:08:52.793635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.552 [2024-12-10 04:08:52.795134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.552 [2024-12-10 04:08:52.795240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.552 [2024-12-10 04:08:52.795345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.552 [2024-12-10 04:08:52.795345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:54.489 [2024-12-10 04:08:53.565688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:54.489 04:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.489 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.490 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:54.490 Malloc1 
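Each pass of the "for i in ${num_subsystems[@]} ... cat" loop above appends one subsystem's worth of RPC lines to rpcs.txt, which rpc_cmd (shutdown.sh@36 in the trace) then replays as a single batch; the Malloc1..Malloc10 bdevs and the nqn.2016-06.io.spdk:cnode1..cnode10 names seen throughout this log come from that file. Roughly, per iteration (the heredoc body itself is never echoed by the trace, so the bdev size and flags below are assumptions):

for i in {1..10}; do
    cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done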
00:21:54.490 [2024-12-10 04:08:53.675948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.490 Malloc2 00:21:54.490 Malloc3 00:21:54.748 Malloc4 00:21:54.748 Malloc5 00:21:54.748 Malloc6 00:21:54.748 Malloc7 00:21:54.748 Malloc8 00:21:54.748 Malloc9 00:21:55.007 Malloc10 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=118455 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:55.007 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:55.007 [2024-12-10 04:08:54.182765] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:00.287 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 118179 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 118179 ']' 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 118179 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118179 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118179' 00:22:00.288 killing process with pid 118179 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 118179 00:22:00.288 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 118179 00:22:00.288 [2024-12-10 04:08:59.180528] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.180621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc692c0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.183630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67f80 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.184477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68450 is same with the state(6) to be set 
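In the flood of write failures below, the short codes expand to states already decoded earlier in this section: (sct=0, sc=8) is generic status 0x08, the same "ABORTED - SQ DELETION (00/08)" printed in long form at the top, and "starting I/O failed: -6" is -ENXIO, matching the "CQ transport error -6 (No such device or address)" messages. In other words, the target deleted its queues while spdk_nvme_perf still had up to 128 commands in flight per qpair (-q 128).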
00:22:00.288 [2024-12-10 04:08:59.186031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 [2024-12-10 04:08:59.186107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67ab0 is same with the state(6) to be set 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 Write completed with error (sct=0, sc=8) 00:22:00.288 starting I/O failed: -6 00:22:00.288 Write 
00:22:00.288 [2024-12-10 04:08:59.189052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.288 [2024-12-10 04:08:59.189696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa73660 is same with the state(6) to be set [message repeated 8 times for tqpair=0xa73660; originally interleaved mid-line with the write-error run]
00:22:00.289 [2024-12-10 04:08:59.189976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.289 [2024-12-10 04:08:59.190076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa73b50 is same with the state(6) to be set [message repeated 8 times for tqpair=0xa73b50]
00:22:00.289 [2024-12-10 04:08:59.190547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74040 is same with the state(6) to be set [message repeated 8 times for tqpair=0xa74040]
00:22:00.289 [2024-12-10 04:08:59.190896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec560 is same with the state(6) to be set [message repeated 6 times for tqpair=0xcec560]
00:22:00.289 [2024-12-10 04:08:59.191021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:00.289 Write completed with error (sct=0, sc=8)
00:22:00.289 starting I/O failed: -6 [the two preceding messages alternate throughout this span; runs condensed]
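On the host side, the nvme_qpair.c: 812 lines come from spdk_nvme_qpair_process_completions(), the SPDK polling entry point, which returns the number of completions it reaped or a negative errno once the transport under the qpair dies; -6 is -ENXIO, which the log itself spells out as "No such device or address". Below is a stand-alone sketch of that poll loop with a stub faking the dead TCP connection, so it runs without the SPDK tree; the stub name and struct are invented for illustration.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of the host-side poll loop behind the nvme_qpair.c:812 messages.
 * In SPDK, spdk_nvme_qpair_process_completions() returns the number of
 * completions reaped, or a negative errno on a transport error; the stub
 * below fakes a dead connection so the error path can run stand-alone. */
struct fake_qpair { int connected; };

static int32_t process_completions(struct fake_qpair *q, uint32_t max)
{
	(void)max;			/* 0 would mean "no limit" in SPDK */
	return q->connected ? 0 : -ENXIO;	/* dead transport: -6 */
}

int main(void)
{
	struct fake_qpair q = { .connected = 0 };
	int32_t rc = process_completions(&q, 0);

	if (rc < 0) {
		/* Mirrors the log: in-flight writes finish with an abort
		 * status while the poller reports the CQ transport error. */
		fprintf(stderr, "CQ transport error %d (%s) on qpair id 1\n",
			rc, rc == -ENXIO ? "No such device or address" : "?");
	}
	return 0;
}

The initiator keeps polling every qpair of each subsystem this way, which is why one CQ transport error is reported per qpair id as each cnode goes away.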
00:22:00.289 [2024-12-10 04:08:59.191527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(6) to be set [message repeated 8 times for tqpair=0xa74da0]
00:22:00.290 [2024-12-10 04:08:59.192242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa743e0 is same with the state(6) to be set [message repeated 6 times for tqpair=0xa743e0]
00:22:00.290 [2024-12-10 04:08:59.192568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.290 NVMe io qpair process completion error
00:22:00.290 [2024-12-10 04:08:59.194678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78710 is same with the state(6) to be set [message repeated 6 times for tqpair=0xa78710]
00:22:00.290 [2024-12-10 04:08:59.195344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa77d50 is same with the state(6) to be set [message repeated 6 times for tqpair=0xa77d50]
00:22:00.291 [2024-12-10 04:08:59.196388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.291 NVMe io qpair process completion error
00:22:00.291 Write completed with error (sct=0, sc=8)
00:22:00.291 starting I/O failed: -6 [write-error/abort runs interleaved throughout this span; condensed]
00:22:00.291 [2024-12-10 04:08:59.197275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.291 [2024-12-10 04:08:59.198191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.291 [2024-12-10 04:08:59.199204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:00.292 [2024-12-10 04:08:59.200719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.292 NVMe io qpair process completion error
00:22:00.292 [2024-12-10 04:08:59.201660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.292 Write completed with error (sct=0, sc=8)
00:22:00.292 starting I/O failed: -6 [write-error/abort runs between each CQ error above; condensed]
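As for the ubiquitous "(sct=0, sc=8)" completions: per the NVMe base specification, status code type 0 is Generic Command Status, and generic status 0x08 is "Command Aborted due to SQ Deletion", i.e. the in-flight writes are aborted because their submission queues are deleted during disconnect, consistent with the -ENXIO transport errors around them. A minimal decoder for the pair (status values taken from the spec, independent of SPDK headers):

#include <stdio.h>

/* Minimal decoder for the "(sct=0, sc=8)" completions above. Values are from
 * the NVMe base spec: SCT 0 is Generic Command Status, and within it 0x08 is
 * "Command Aborted due to SQ Deletion" -- what in-flight writes report when
 * their submission queue is destroyed during a disconnect. */
static const char *decode_status(unsigned sct, unsigned sc)
{
	if (sct != 0)
		return "non-generic status code type";
	switch (sc) {
	case 0x00: return "Successful Completion";
	case 0x04: return "Data Transfer Error";
	case 0x08: return "Command Aborted due to SQ Deletion";
	default:   return "other generic status";
	}
}

int main(void)
{
	printf("sct=0, sc=8 -> %s\n", decode_status(0, 8));
	return 0;
}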
00:22:00.293 [2024-12-10 04:08:59.202560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.293 [2024-12-10 04:08:59.203552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:00.294 [2024-12-10 04:08:59.205441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.294 NVMe io qpair process completion error
00:22:00.294 [2024-12-10 04:08:59.206427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.294 [2024-12-10 04:08:59.207357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.294 Write completed with error (sct=0, sc=8) [write-error/abort runs between each CQ error above; condensed]
00:22:00.294 starting I/O
failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.294 starting I/O failed: -6 00:22:00.294 [2024-12-10 04:08:59.208340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:00.294 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 
Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write 
completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 [2024-12-10 04:08:59.211981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.295 NVMe io qpair process completion error 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 [2024-12-10 04:08:59.212983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting 
I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 Write completed with error (sct=0, sc=8) 00:22:00.295 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 [2024-12-10 04:08:59.213794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write 
completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 [2024-12-10 
04:08:59.214842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 
00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.296 starting I/O failed: -6 00:22:00.296 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 [2024-12-10 04:08:59.218366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:00.297 NVMe io qpair process completion error 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: 
-6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 
starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 
starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.297 starting I/O failed: -6 00:22:00.297 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 
starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O 
failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 [2024-12-10 04:08:59.223207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write 
completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 [2024-12-10 04:08:59.224085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.298 starting I/O failed: -6 00:22:00.298 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 
00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 [2024-12-10 04:08:59.225121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 
00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 00:22:00.299 [2024-12-10 04:08:59.226673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:00.299 NVMe io qpair process completion error 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 Write completed with error (sct=0, sc=8) 00:22:00.299 starting I/O failed: -6 
00:22:00.299 Write completed with error (sct=0, sc=8)
00:22:00.299 starting I/O failed: -6
00:22:00.300 Write completed with error (sct=0, sc=8)
00:22:00.300 starting I/O failed: -6
00:22:00.300 [2024-12-10 04:08:59.227682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.300 Write completed with error (sct=0, sc=8)
00:22:00.300 starting I/O failed: -6
00:22:00.300 [2024-12-10 04:08:59.228457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.300 Write completed with error (sct=0, sc=8)
00:22:00.300 starting I/O failed: -6
00:22:00.300 [2024-12-10 04:08:59.229518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:00.301 Write completed with error (sct=0, sc=8)
00:22:00.301 starting I/O failed: -6
00:22:00.301 Write completed with error (sct=0, sc=8)
00:22:00.301 starting I/O failed: -6
00:22:00.301 [2024-12-10 04:08:59.235206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.301 NVMe io qpair process completion error
00:22:00.301 Write completed with error (sct=0, sc=8)
00:22:00.301 starting I/O failed: -6
00:22:00.301 [2024-12-10 04:08:59.236293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:00.301 Write completed with error (sct=0, sc=8)
00:22:00.301 starting I/O failed: -6
00:22:00.301 [2024-12-10 04:08:59.237225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:00.302 Write completed with error (sct=0, sc=8)
00:22:00.302 starting I/O failed: -6
00:22:00.302 [2024-12-10 04:08:59.238280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:00.302 Write completed with error (sct=0, sc=8)
00:22:00.302 starting I/O failed: -6
00:22:00.302 [2024-12-10 04:08:59.242024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.302 NVMe io qpair process completion error
00:22:00.302 Write completed with error (sct=0, sc=8)
00:22:00.303 Write completed with error (sct=0, sc=8)
00:22:00.303 Initializing NVMe Controllers
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:00.303 Controller IO queue size 128, less than required.
00:22:00.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:00.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:00.303 Initialization complete. Launching workers.
00:22:00.303 ========================================================
00:22:00.303 Latency(us)
00:22:00.303 Device Information : IOPS MiB/s Average min max
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2179.06 93.63 58748.70 880.51 110055.60
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2189.52 94.08 58478.39 742.18 111630.47
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2197.16 94.41 58341.15 672.37 117434.78
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2190.40 94.12 57872.52 947.91 106226.98
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2201.30 94.59 58068.33 625.29 118772.71
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2212.20 95.06 57313.26 621.12 103712.59
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2205.22 94.76 57503.87 920.84 101661.85
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2201.73 94.61 57609.54 860.30 99650.17
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2147.44 92.27 59104.15 723.31 103829.72
00:22:00.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2196.28 94.37 57814.19 1155.79 108938.85
00:22:00.303 ========================================================
00:22:00.303 Total : 21920.31 941.89 58081.65 621.12 118772.71
00:22:00.303
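Every controller above reports the same pair of notices: its I/O queue size is 128, less than what the workload asked for, so surplus requests wait inside the host NVMe driver rather than on the wire. When that queueing matters, the usual remedy is exactly what the message suggests: lower the queue depth (-q) or the I/O size (-o) on the perf invocation. A hypothetical re-run with a smaller queue depth (flag values illustrative):

  # Re-run with a queue depth the controller can absorb; -q and -o are
  # spdk_nvme_perf's queue-depth and I/O-size options.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4'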
00:22:00.303 [2024-12-10 04:08:59.247301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521890 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521bc0 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522a70 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523720 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522410 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521560 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522740 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521ef0 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523ae0 is same with the state(6) to be set
00:22:00.303 [2024-12-10 04:08:59.247582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523900 is same with the state(6) to be set
00:22:00.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:00.303 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 118455
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 118455
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:01.681 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 118455
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
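The NOT wait 118455 trace just above is autotest_common.sh's negation helper at work: it runs the wrapped command, captures the exit status (es=1 here, because the waited-on perf process failed), and succeeds precisely because that status is non-zero. A simplified sketch of the same logic (the real helper additionally special-cases statuses above 128 and an optional allowed-failure pattern, as the @663 and @674 lines show):

  NOT() {
      local es=0
      "$@" || es=$?    # run the wrapped command and capture its exit status
      (( es != 0 ))    # NOT succeeds only if the command failed
  }
  # e.g. NOT wait 118455 returns 0 here, since pid 118455 exited non-zero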
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:01.682 rmmod nvme_tcp
00:22:01.682 rmmod nvme_fabrics
00:22:01.682 rmmod nvme_keyring
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 118179 ']'
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 118179
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 118179 ']'
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 118179
00:22:01.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118179) - No such process
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 118179 is not found'
00:22:01.682 Process with pid 118179 is not found
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:01.682 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:03.587
00:22:03.587 real 0m10.418s
00:22:03.587 user 0m27.572s
00:22:03.587 sys 0m5.273s
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:03.587 ************************************
00:22:03.587 END TEST nvmf_shutdown_tc4
00:22:03.587 ************************************
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:03.587
00:22:03.587 real 0m42.023s
00:22:03.587 user 1m45.595s
00:22:03.587 sys 0m14.072s
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:03.587 ************************************
00:22:03.587 END TEST nvmf_shutdown
00:22:03.587 ************************************
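The nvmftestfini teardown traced above reduces to a short, repeatable sequence; a simplified sketch (the module names and the iptables filter are the ones in the trace, while the retry bound and sleep are illustrative):

  sync
  for i in {1..20}; do
      # Unload the kernel NVMe-oF initiator; this also pulls out
      # nvme_fabrics and nvme_keyring, as the rmmod lines above show.
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  # The iptr step: drop SPDK_NVMF iptables rules, keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore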
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:03.587 ************************************
00:22:03.587 START TEST nvmf_nsid
00:22:03.587 ************************************
00:22:03.587 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:03.847 * Looking for test storage...
00:22:03.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:22:03.847 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:03.847 --rc genhtml_branch_coverage=1
00:22:03.847 --rc genhtml_function_coverage=1
00:22:03.847 --rc genhtml_legend=1
00:22:03.847 --rc geninfo_all_blocks=1
00:22:03.847 --rc geninfo_unexecuted_blocks=1
00:22:03.847
00:22:03.847 '
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:03.847 --rc genhtml_branch_coverage=1
00:22:03.847 --rc genhtml_function_coverage=1
00:22:03.847 --rc genhtml_legend=1
00:22:03.847 --rc geninfo_all_blocks=1
00:22:03.847 --rc geninfo_unexecuted_blocks=1
00:22:03.847
00:22:03.847 '
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:22:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:03.847 --rc genhtml_branch_coverage=1
00:22:03.847 --rc genhtml_function_coverage=1
00:22:03.847 --rc genhtml_legend=1
00:22:03.847 --rc geninfo_all_blocks=1
00:22:03.847 --rc geninfo_unexecuted_blocks=1
00:22:03.847
00:22:03.847 '
00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:22:03.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:03.847 --rc genhtml_branch_coverage=1
00:22:03.847 --rc genhtml_function_coverage=1
00:22:03.847 --rc genhtml_legend=1
00:22:03.847 --rc geninfo_all_blocks=1
00:22:03.847 --rc geninfo_unexecuted_blocks=1
00:22:03.847
00:22:03.847 '
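The lt 1.15 2 trace above is scripts/common.sh comparing version strings field by field: both arguments are split on '.', '-' and ':' (IFS=.-:), each component is validated as a decimal, and the comparison stops at the first differing field, so 1 < 2 already settles it. A condensed sketch of the same idea (the helper name and the decimal validation are simplified here):

  version_lt() {
      local -a v1 v2
      local IFS=.-:                 # split on dots, dashes and colons
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                      # equal is not less-than
  }
  # version_lt 1.15 2 returns 0, matching the cmp_versions result above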
== FreeBSD ]] 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.847 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.848 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.415 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.415 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.415 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.415 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:10.416 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:10.416 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
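[editor's note] The two "Found 0000:af:00.x (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs walking sysfs and matching vendor/device IDs. A minimal standalone sketch of that discovery, assuming the Intel E810 pair (0x8086/0x159b) seen in this log:

for pci in /sys/bus/pci/devices/*; do
    # match vendor:device against the E810 ID from the log
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        # each matching PCI function exposes its netdev(s) under <pci>/net/
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done

On this rig the loop reports cvl_0_0 and cvl_0_1, which is exactly what the trace echoes a few lines below.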
00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:10.416 Found net devices under 0000:af:00.0: cvl_0_0 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:10.416 Found net devices under 0000:af:00.1: cvl_0_1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.416 04:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:22:10.416 00:22:10.416 --- 10.0.0.2 ping statistics --- 00:22:10.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.416 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:22:10.416 00:22:10.416 --- 10.0.0.1 ping statistics --- 00:22:10.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.416 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.416 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=123471 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 123471 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 123471 ']' 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.417 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.417 [2024-12-10 04:09:09.041101] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
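[editor's note] Condensed from the nvmf_tcp_init trace above (interface names cvl_0_0/cvl_0_1 are specific to this log): one E810 port is moved into a private network namespace to play the NVMe-oF target, while its sibling port stays in the root namespace as the initiator, and the two pings verify reachability in both directions:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is why nvmf_tgt below is launched under "ip netns exec cvl_0_0_ns_spdk": it must listen on the namespaced side of the link.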
00:22:10.417 [2024-12-10 04:09:09.041146] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.417 [2024-12-10 04:09:09.120638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.417 [2024-12-10 04:09:09.162078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.417 [2024-12-10 04:09:09.162109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.417 [2024-12-10 04:09:09.162116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.417 [2024-12-10 04:09:09.162122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.417 [2024-12-10 04:09:09.162127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.417 [2024-12-10 04:09:09.162625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=123579 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
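[editor's note] The get_main_ns_ip trace just above resolves an address indirectly by transport type rather than hardcoding it. A compact re-creation of that lookup, with the values this log echoes:

declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the NAME of the variable to read
echo "${!ip}"                          # bash indirect expansion -> 10.0.0.1

That 10.0.0.1 becomes tgt2addr in the next line of the trace, i.e. the second target listens on the initiator-side interface.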
00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cd898757-5f60-4d07-a6c5-6bdb5954e350 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=952e1a4d-7b3f-46ed-bfd8-8a576f6bb940 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=38e4d914-a169-416a-9d0c-7fa0031b4e8e 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.417 null0 00:22:10.417 null1 00:22:10.417 [2024-12-10 04:09:09.345371] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:22:10.417 [2024-12-10 04:09:09.345416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123579 ] 00:22:10.417 null2 00:22:10.417 [2024-12-10 04:09:09.351536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.417 [2024-12-10 04:09:09.375726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 123579 /var/tmp/tgt2.sock 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 123579 ']' 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:10.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
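[editor's note] The three UUIDs generated above (ns1uuid/ns2uuid/ns3uuid) are what the nvme0n1..n3 checks further down compare against: uuid2nguid is simply "tr -d -", and the NGUID read back over nvme-cli must match it case-insensitively. The ns1 check, reduced to its essentials as it appears later in this log:

ns1uuid=cd898757-5f60-4d07-a6c5-6bdb5954e350             # from the uuidgen above
expected=$(tr -d - <<< "$ns1uuid")                       # uuid2nguid: strip dashes
nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)  # NGUID reported by the target
[[ ${nguid^^} == "${expected^^}" ]] && echo "NGUID matches ns1uuid"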
00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:10.417 [2024-12-10 04:09:09.416810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.417 [2024-12-10 04:09:09.456233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:10.417 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:10.984 [2024-12-10 04:09:09.978818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.984 [2024-12-10 04:09:09.994909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:10.984 nvme0n1 nvme0n2 00:22:10.984 nvme1n1 00:22:10.984 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:10.984 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:10.984 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:11.919 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:12.855 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:12.855 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:12.855 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:12.855 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:13.113 04:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cd898757-5f60-4d07-a6c5-6bdb5954e350 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cd8987575f604d07a6c56bdb5954e350 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CD8987575F604D07A6C56BDB5954E350 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CD8987575F604D07A6C56BDB5954E350 == \C\D\8\9\8\7\5\7\5\F\6\0\4\D\0\7\A\6\C\5\6\B\D\B\5\9\5\4\E\3\5\0 ]] 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 952e1a4d-7b3f-46ed-bfd8-8a576f6bb940 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=952e1a4d7b3f46edbfd88a576f6bb940 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 952E1A4D7B3F46EDBFD88A576F6BB940 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 952E1A4D7B3F46EDBFD88A576F6BB940 == \9\5\2\E\1\A\4\D\7\B\3\F\4\6\E\D\B\F\D\8\8\A\5\7\6\F\6\B\B\9\4\0 ]] 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:13.113 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:13.113 04:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 38e4d914-a169-416a-9d0c-7fa0031b4e8e 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=38e4d914a169416a9d0c7fa0031b4e8e 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 38E4D914A169416A9D0C7FA0031B4E8E 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 38E4D914A169416A9D0C7FA0031B4E8E == \3\8\E\4\D\9\1\4\A\1\6\9\4\1\6\A\9\D\0\C\7\F\A\0\0\3\1\B\4\E\8\E ]] 00:22:13.114 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 123579 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 123579 ']' 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 123579 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123579 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123579' 00:22:13.373 killing process with pid 123579 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 123579 00:22:13.373 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 123579 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.632 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.632 rmmod nvme_tcp 00:22:13.891 rmmod nvme_fabrics 00:22:13.891 rmmod nvme_keyring 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 123471 ']' 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 123471 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 123471 ']' 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 123471 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.891 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123471 00:22:13.891 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.891 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.892 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123471' 00:22:13.892 killing process with pid 123471 00:22:13.892 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 123471 00:22:13.892 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 123471 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.151 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.057 00:22:16.057 real 0m12.429s 00:22:16.057 user 0m9.718s 00:22:16.057 
sys 0m5.467s 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 ************************************ 00:22:16.057 END TEST nvmf_nsid 00:22:16.057 ************************************ 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:16.057 00:22:16.057 real 11m57.316s 00:22:16.057 user 25m32.964s 00:22:16.057 sys 3m43.681s 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.057 04:09:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 ************************************ 00:22:16.057 END TEST nvmf_target_extra 00:22:16.057 ************************************ 00:22:16.317 04:09:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:16.317 04:09:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.317 04:09:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.317 04:09:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.317 ************************************ 00:22:16.317 START TEST nvmf_host 00:22:16.317 ************************************ 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:16.317 * Looking for test storage... 00:22:16.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.317 --rc genhtml_branch_coverage=1 00:22:16.317 --rc genhtml_function_coverage=1 00:22:16.317 --rc genhtml_legend=1 00:22:16.317 --rc geninfo_all_blocks=1 00:22:16.317 --rc geninfo_unexecuted_blocks=1 00:22:16.317 00:22:16.317 ' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.317 --rc genhtml_branch_coverage=1 00:22:16.317 --rc genhtml_function_coverage=1 00:22:16.317 --rc genhtml_legend=1 00:22:16.317 --rc geninfo_all_blocks=1 00:22:16.317 --rc geninfo_unexecuted_blocks=1 00:22:16.317 00:22:16.317 ' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.317 --rc genhtml_branch_coverage=1 00:22:16.317 --rc genhtml_function_coverage=1 00:22:16.317 --rc genhtml_legend=1 00:22:16.317 --rc geninfo_all_blocks=1 00:22:16.317 --rc geninfo_unexecuted_blocks=1 00:22:16.317 00:22:16.317 ' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.317 --rc genhtml_branch_coverage=1 00:22:16.317 --rc genhtml_function_coverage=1 00:22:16.317 --rc genhtml_legend=1 00:22:16.317 --rc geninfo_all_blocks=1 00:22:16.317 --rc geninfo_unexecuted_blocks=1 00:22:16.317 00:22:16.317 ' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
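[editor's note] The cmp_versions trace repeated above (here again deciding whether lcov is older than 2) splits each version on ".-:" and compares the fields numerically, padding the shorter version with zeros. A self-contained equivalent of the "lt 1.15 2" call it performs, as a sketch rather than the exact scripts/common.sh code:

lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spelling"

Field 0 already decides it here (1 < 2), which is why the trace returns 0 and exports the old-style LCOV_OPTS.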
00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.317 04:09:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.578 ************************************ 00:22:16.578 START TEST nvmf_multicontroller 00:22:16.578 ************************************ 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.578 * Looking for test storage... 
00:22:16.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.578 --rc genhtml_branch_coverage=1 00:22:16.578 --rc genhtml_function_coverage=1 00:22:16.578 --rc genhtml_legend=1 00:22:16.578 --rc geninfo_all_blocks=1 00:22:16.578 --rc geninfo_unexecuted_blocks=1 00:22:16.578 00:22:16.578 ' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.578 --rc genhtml_branch_coverage=1 00:22:16.578 --rc genhtml_function_coverage=1 00:22:16.578 --rc genhtml_legend=1 00:22:16.578 --rc geninfo_all_blocks=1 00:22:16.578 --rc geninfo_unexecuted_blocks=1 00:22:16.578 00:22:16.578 ' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.578 --rc genhtml_branch_coverage=1 00:22:16.578 --rc genhtml_function_coverage=1 00:22:16.578 --rc genhtml_legend=1 00:22:16.578 --rc geninfo_all_blocks=1 00:22:16.578 --rc geninfo_unexecuted_blocks=1 00:22:16.578 00:22:16.578 ' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.578 --rc genhtml_branch_coverage=1 00:22:16.578 --rc genhtml_function_coverage=1 00:22:16.578 --rc genhtml_legend=1 00:22:16.578 --rc geninfo_all_blocks=1 00:22:16.578 --rc geninfo_unexecuted_blocks=1 00:22:16.578 00:22:16.578 ' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:16.578 04:09:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.578 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.579 04:09:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.579 04:09:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.151 
04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:23.151 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:23.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.151 04:09:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:23.151 Found net devices under 0000:af:00.0: cvl_0_0 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:23.151 Found net devices under 0000:af:00.1: cvl_0_1 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.151 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
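Before any NVMe-oF traffic can flow, nvmf_tcp_init (traced below) splits the two detected e810 ports into a target side and an initiator side: the target port is moved into a private network namespace so that TCP between 10.0.0.1 and 10.0.0.2 genuinely crosses the link instead of short-circuiting through the local stack. The setup reduces to the following commands, reconstructed from the trace that follows; interface and namespace names are exactly those in the log:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the default netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up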
00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:22:23.152 00:22:23.152 --- 10.0.0.2 ping statistics --- 00:22:23.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.152 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:23.152 00:22:23.152 --- 10.0.0.1 ping statistics --- 00:22:23.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.152 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=127745 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 127745 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 127745 ']' 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.152 04:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 [2024-12-10 04:09:21.790256] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:22:23.152 [2024-12-10 04:09:21.790303] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.152 [2024-12-10 04:09:21.869671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.152 [2024-12-10 04:09:21.910321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.152 [2024-12-10 04:09:21.910357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.152 [2024-12-10 04:09:21.910366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.152 [2024-12-10 04:09:21.910372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.152 [2024-12-10 04:09:21.910376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.152 [2024-12-10 04:09:21.911625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.152 [2024-12-10 04:09:21.911729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.152 [2024-12-10 04:09:21.911731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 [2024-12-10 04:09:22.042992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 Malloc0 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 [2024-12-10 04:09:22.110741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 [2024-12-10 04:09:22.118678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.152 Malloc1 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.152 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=127847 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 127847 /var/tmp/bdevperf.sock 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 127847 ']' 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
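With bdevperf parked on its private RPC socket, the negative tests that follow all re-issue bdev_nvme_attach_controller under the bdev name NVMe0 with conflicting parameters, and each is expected to fail with JSON-RPC error -114 while the first attach stands. A sketch of the first two calls as plain RPC invocations, assuming rpc_cmd wraps the stock scripts/rpc.py client:

    # Initial attach: succeeds and creates bdev NVMe0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Same name and network path but a different hostnqn: rejected with -114,
    # "A controller named NVMe0 already exists with the specified network path".
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001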
00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.153 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.412 NVMe0n1 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.412 1 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.412 request: 00:22:23.412 { 00:22:23.412 "name": "NVMe0", 00:22:23.412 "trtype": "tcp", 00:22:23.412 "traddr": "10.0.0.2", 00:22:23.412 "adrfam": "ipv4", 00:22:23.412 "trsvcid": "4420", 00:22:23.412 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:23.412 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:23.412 "hostaddr": "10.0.0.1", 00:22:23.412 "prchk_reftag": false, 00:22:23.412 "prchk_guard": false, 00:22:23.412 "hdgst": false, 00:22:23.412 "ddgst": false, 00:22:23.412 "allow_unrecognized_csi": false, 00:22:23.412 "method": "bdev_nvme_attach_controller", 00:22:23.412 "req_id": 1 00:22:23.412 } 00:22:23.412 Got JSON-RPC error response 00:22:23.412 response: 00:22:23.412 { 00:22:23.412 "code": -114, 00:22:23.412 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:23.412 } 00:22:23.412 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.413 request: 00:22:23.413 { 00:22:23.413 "name": "NVMe0", 00:22:23.413 "trtype": "tcp", 00:22:23.413 "traddr": "10.0.0.2", 00:22:23.413 "adrfam": "ipv4", 00:22:23.413 "trsvcid": "4420", 00:22:23.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.413 "hostaddr": "10.0.0.1", 00:22:23.413 "prchk_reftag": false, 00:22:23.413 "prchk_guard": false, 00:22:23.413 "hdgst": false, 00:22:23.413 "ddgst": false, 00:22:23.413 "allow_unrecognized_csi": false, 00:22:23.413 "method": "bdev_nvme_attach_controller", 00:22:23.413 "req_id": 1 00:22:23.413 } 00:22:23.413 Got JSON-RPC error response 00:22:23.413 response: 00:22:23.413 { 00:22:23.413 "code": -114, 00:22:23.413 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:23.413 } 00:22:23.413 04:09:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.413 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 request: 00:22:23.672 { 00:22:23.672 "name": "NVMe0", 00:22:23.672 "trtype": "tcp", 00:22:23.672 "traddr": "10.0.0.2", 00:22:23.672 "adrfam": "ipv4", 00:22:23.672 "trsvcid": "4420", 00:22:23.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.672 "hostaddr": "10.0.0.1", 00:22:23.672 "prchk_reftag": false, 00:22:23.672 "prchk_guard": false, 00:22:23.672 "hdgst": false, 00:22:23.672 "ddgst": false, 00:22:23.672 "multipath": "disable", 00:22:23.672 "allow_unrecognized_csi": false, 00:22:23.672 "method": "bdev_nvme_attach_controller", 00:22:23.672 "req_id": 1 00:22:23.672 } 00:22:23.672 Got JSON-RPC error response 00:22:23.672 response: 00:22:23.672 { 00:22:23.672 "code": -114, 00:22:23.672 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:23.672 } 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.672 04:09:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 request: 00:22:23.672 { 00:22:23.672 "name": "NVMe0", 00:22:23.672 "trtype": "tcp", 00:22:23.672 "traddr": "10.0.0.2", 00:22:23.672 "adrfam": "ipv4", 00:22:23.672 "trsvcid": "4420", 00:22:23.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.672 "hostaddr": "10.0.0.1", 00:22:23.672 "prchk_reftag": false, 00:22:23.672 "prchk_guard": false, 00:22:23.672 "hdgst": false, 00:22:23.672 "ddgst": false, 00:22:23.672 "multipath": "failover", 00:22:23.672 "allow_unrecognized_csi": false, 00:22:23.672 "method": "bdev_nvme_attach_controller", 00:22:23.672 "req_id": 1 00:22:23.672 } 00:22:23.672 Got JSON-RPC error response 00:22:23.672 response: 00:22:23.672 { 00:22:23.672 "code": -114, 00:22:23.672 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:23.672 } 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 NVMe0n1 00:22:23.672 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
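The attach at multicontroller.sh@79 is the one variant that succeeds: same controller name and subsystem NQN, but a genuinely new network path (port 4421 instead of 4420), which the bdev_nvme layer appears to accept as a second path rather than rejecting as a duplicate. After detaching that path and attaching an independent NVMe1 controller, the test verifies that exactly two controllers are visible before driving I/O; a sketch of that check, again assuming the stock scripts/rpc.py client behind rpc_cmd:

    # Count attached controllers on the bdevperf RPC socket; the test expects 2
    # at this point (NVMe0 on 4420 plus the separately attached NVMe1 on 4421).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe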
00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.673 04:09:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.932 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:23.932 04:09:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:25.310 { 00:22:25.310 "results": [ 00:22:25.310 { 00:22:25.310 "job": "NVMe0n1", 00:22:25.310 "core_mask": "0x1", 00:22:25.310 "workload": "write", 00:22:25.310 "status": "finished", 00:22:25.310 "queue_depth": 128, 00:22:25.310 "io_size": 4096, 00:22:25.310 "runtime": 1.007241, 00:22:25.310 "iops": 25219.386422911695, 00:22:25.310 "mibps": 98.51322821449881, 00:22:25.310 "io_failed": 0, 00:22:25.310 "io_timeout": 0, 00:22:25.310 "avg_latency_us": 5069.347357275955, 00:22:25.310 "min_latency_us": 2949.12, 00:22:25.310 "max_latency_us": 8862.96380952381 00:22:25.310 } 00:22:25.310 ], 00:22:25.310 "core_count": 1 00:22:25.310 } 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 
-- # '[' -z 127847 ']' 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127847' 00:22:25.310 killing process with pid 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 127847 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:25.310 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:25.310 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:25.310 [2024-12-10 04:09:22.222442] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:22:25.310 [2024-12-10 04:09:22.222493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127847 ] 00:22:25.310 [2024-12-10 04:09:22.295679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.310 [2024-12-10 04:09:22.335074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.310 [2024-12-10 04:09:23.034007] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9fc3b059-4536-4f5a-af35-d928e5355341 already exists 00:22:25.310 [2024-12-10 04:09:23.034035] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9fc3b059-4536-4f5a-af35-d928e5355341 alias for bdev NVMe1n1 00:22:25.310 [2024-12-10 04:09:23.034043] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:25.310 Running I/O for 1 seconds... 00:22:25.310 25147.00 IOPS, 98.23 MiB/s 00:22:25.310 Latency(us) 00:22:25.310 [2024-12-10T03:09:24.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.310 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:25.310 NVMe0n1 : 1.01 25219.39 98.51 0.00 0.00 5069.35 2949.12 8862.96 00:22:25.310 [2024-12-10T03:09:24.596Z] =================================================================================================================== 00:22:25.310 [2024-12-10T03:09:24.596Z] Total : 25219.39 98.51 0.00 0.00 5069.35 2949.12 8862.96 00:22:25.310 Received shutdown signal, test time was about 1.000000 seconds 00:22:25.310 00:22:25.310 Latency(us) 00:22:25.310 [2024-12-10T03:09:24.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.311 [2024-12-10T03:09:24.597Z] =================================================================================================================== 00:22:25.311 [2024-12-10T03:09:24.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.311 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.311 rmmod nvme_tcp 00:22:25.311 rmmod nvme_fabrics 00:22:25.311 rmmod nvme_keyring 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:25.311 
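For orientation, the multicontroller exchange traced above is worth restating in plain commands. bdevperf is driven over its own RPC socket, /var/tmp/bdevperf.sock, rather than the target's default socket, and the 'Bdev name ... already exists' errors preserved in try.txt are the expected side effect of attaching a second controller to a subsystem whose namespace UUID is already registered: the namespace bdev cannot be registered twice, but the controller attach itself succeeds, which is what the grep -c NVMe check verifies before any I/O runs. A minimal sketch using only the sockets, addresses, and NQNs that appear in the log (rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py):

    # Attach a second controller (NVMe1) to the subsystem NVMe0 already uses,
    # pinning the host side of the TCP connection to 10.0.0.1 as the trace does
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Both controllers must be visible before I/O starts
    [ "$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]
    # Fire the queued write workload; this produces the JSON result block above
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # Drop the second controller once the run completes
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1

The earlier detach of NVMe0's 4421 path passes the full address tuple instead of a bare name, presumably because the tuple narrows the detach to that one path rather than the whole controller.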
04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 127745 ']' 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 127745 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 127745 ']' 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 127745 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127745 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127745' 00:22:25.311 killing process with pid 127745 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 127745 00:22:25.311 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 127745 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.570 04:09:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.107 00:22:28.107 real 0m11.230s 00:22:28.107 user 0m12.652s 00:22:28.107 sys 0m5.143s 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.107 ************************************ 00:22:28.107 END TEST nvmf_multicontroller 00:22:28.107 ************************************ 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.107 ************************************ 00:22:28.107 START TEST nvmf_aer 00:22:28.107 ************************************ 00:22:28.107 04:09:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:28.107 * Looking for test storage... 00:22:28.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:28.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.107 --rc genhtml_branch_coverage=1 00:22:28.107 --rc genhtml_function_coverage=1 00:22:28.107 --rc genhtml_legend=1 00:22:28.107 --rc geninfo_all_blocks=1 00:22:28.107 --rc geninfo_unexecuted_blocks=1 00:22:28.107 00:22:28.107 ' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:28.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.107 --rc genhtml_branch_coverage=1 00:22:28.107 --rc genhtml_function_coverage=1 00:22:28.107 --rc genhtml_legend=1 00:22:28.107 --rc geninfo_all_blocks=1 00:22:28.107 --rc geninfo_unexecuted_blocks=1 00:22:28.107 00:22:28.107 ' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:28.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.107 --rc genhtml_branch_coverage=1 00:22:28.107 --rc genhtml_function_coverage=1 00:22:28.107 --rc genhtml_legend=1 00:22:28.107 --rc geninfo_all_blocks=1 00:22:28.107 --rc geninfo_unexecuted_blocks=1 00:22:28.107 00:22:28.107 ' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:28.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.107 --rc genhtml_branch_coverage=1 00:22:28.107 --rc genhtml_function_coverage=1 00:22:28.107 --rc genhtml_legend=1 00:22:28.107 --rc geninfo_all_blocks=1 00:22:28.107 --rc geninfo_unexecuted_blocks=1 00:22:28.107 00:22:28.107 ' 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.107 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.108 04:09:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:33.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:33.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:33.501 Found net devices under 0000:af:00.0: cvl_0_0 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.501 04:09:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:33.501 Found net devices under 0000:af:00.1: cvl_0_1 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.501 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.835 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.835 
04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:22:33.835 00:22:33.835 --- 10.0.0.2 ping statistics --- 00:22:33.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.835 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:33.836 00:22:33.836 --- 10.0.0.1 ping statistics --- 00:22:33.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.836 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=131677 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 131677 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 131677 ']' 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.836 04:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.836 [2024-12-10 04:09:33.045397] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
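The nvmf_tcp_init trace interleaved above builds the whole two-host test bed from one dual-port e810 NIC: the target port is pushed into its own network namespace and the initiator port stays in the root namespace. Condensed into plain commands, with interface names and addresses exactly as they appear in the log (the SPDK_NVMF comment tag on the iptables rule is what lets the later iptr cleanup find and remove it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator interface, tagged for cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Both directions must answer before the target app is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the startup notices that follow report four reactor cores.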
00:22:33.836 [2024-12-10 04:09:33.045440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.145 [2024-12-10 04:09:33.129063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.145 [2024-12-10 04:09:33.171952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.145 [2024-12-10 04:09:33.171987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.145 [2024-12-10 04:09:33.171996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.145 [2024-12-10 04:09:33.172002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.145 [2024-12-10 04:09:33.172009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.145 [2024-12-10 04:09:33.173433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.145 [2024-12-10 04:09:33.173463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.145 [2024-12-10 04:09:33.173569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.145 [2024-12-10 04:09:33.173570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 [2024-12-10 04:09:33.318376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 Malloc0 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 [2024-12-10 04:09:33.376368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.145 [ 00:22:34.145 { 00:22:34.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:34.145 "subtype": "Discovery", 00:22:34.145 "listen_addresses": [], 00:22:34.145 "allow_any_host": true, 00:22:34.145 "hosts": [] 00:22:34.145 }, 00:22:34.145 { 00:22:34.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.145 "subtype": "NVMe", 00:22:34.145 "listen_addresses": [ 00:22:34.145 { 00:22:34.145 "trtype": "TCP", 00:22:34.145 "adrfam": "IPv4", 00:22:34.145 "traddr": "10.0.0.2", 00:22:34.145 "trsvcid": "4420" 00:22:34.145 } 00:22:34.145 ], 00:22:34.145 "allow_any_host": true, 00:22:34.145 "hosts": [], 00:22:34.145 "serial_number": "SPDK00000000000001", 00:22:34.145 "model_number": "SPDK bdev Controller", 00:22:34.145 "max_namespaces": 2, 00:22:34.145 "min_cntlid": 1, 00:22:34.145 "max_cntlid": 65519, 00:22:34.145 "namespaces": [ 00:22:34.145 { 00:22:34.145 "nsid": 1, 00:22:34.145 "bdev_name": "Malloc0", 00:22:34.145 "name": "Malloc0", 00:22:34.145 "nguid": "2CAFAF18887C45C5A1D600A28990A2CA", 00:22:34.145 "uuid": "2cafaf18-887c-45c5-a1d6-00a28990a2ca" 00:22:34.145 } 00:22:34.145 ] 00:22:34.145 } 00:22:34.145 ] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=131804 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:34.145 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:34.404 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 Malloc1 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 Asynchronous Event Request test 00:22:34.663 Attaching to 10.0.0.2 00:22:34.663 Attached to 10.0.0.2 00:22:34.663 Registering asynchronous event callbacks... 00:22:34.663 Starting namespace attribute notice tests for all controllers... 00:22:34.663 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:34.663 aer_cb - Changed Namespace 00:22:34.663 Cleaning up... 
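The AER exchange above compresses three actors: the aer test binary, the polling loop in waitforfile (the i=1, i=2, i=3 records with sleep 0.1 between them), and the RPC that actually fires the event. The binary connects to cnode1, arms its Asynchronous Event Request callbacks, and only then creates /tmp/aer_touch_file; once the script sees the file it adds Malloc1 as a second namespace, which makes the target raise a Changed Namespace List notice, the 'aer_cb for log page 4' line. A minimal restating under the same names, with the real loop's 200-iteration cap elided:

    AER_TOUCH_FILE=/tmp/aer_touch_file
    rm -f "$AER_TOUCH_FILE"
    # aer arms the AEN callbacks, then creates the touch file once it is listening
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$AER_TOUCH_FILE" &
    aerpid=$!
    until [ -e "$AER_TOUCH_FILE" ]; do sleep 0.1; done
    # A second namespace on cnode1 triggers the Namespace Attribute Changed AEN
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"    # returns after 'aer_cb - Changed Namespace ... Cleaning up'

The nvmf_get_subsystems dump that follows confirms the result: cnode1 now lists both Malloc0 (nsid 1) and Malloc1 (nsid 2).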
00:22:34.663 [ 00:22:34.663 { 00:22:34.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:34.663 "subtype": "Discovery", 00:22:34.663 "listen_addresses": [], 00:22:34.663 "allow_any_host": true, 00:22:34.663 "hosts": [] 00:22:34.663 }, 00:22:34.663 { 00:22:34.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.663 "subtype": "NVMe", 00:22:34.663 "listen_addresses": [ 00:22:34.663 { 00:22:34.663 "trtype": "TCP", 00:22:34.663 "adrfam": "IPv4", 00:22:34.663 "traddr": "10.0.0.2", 00:22:34.663 "trsvcid": "4420" 00:22:34.663 } 00:22:34.663 ], 00:22:34.663 "allow_any_host": true, 00:22:34.663 "hosts": [], 00:22:34.663 "serial_number": "SPDK00000000000001", 00:22:34.663 "model_number": "SPDK bdev Controller", 00:22:34.663 "max_namespaces": 2, 00:22:34.663 "min_cntlid": 1, 00:22:34.663 "max_cntlid": 65519, 00:22:34.663 "namespaces": [ 00:22:34.663 { 00:22:34.663 "nsid": 1, 00:22:34.663 "bdev_name": "Malloc0", 00:22:34.663 "name": "Malloc0", 00:22:34.663 "nguid": "2CAFAF18887C45C5A1D600A28990A2CA", 00:22:34.663 "uuid": "2cafaf18-887c-45c5-a1d6-00a28990a2ca" 00:22:34.663 }, 00:22:34.663 { 00:22:34.663 "nsid": 2, 00:22:34.663 "bdev_name": "Malloc1", 00:22:34.663 "name": "Malloc1", 00:22:34.663 "nguid": "2AFE26DEA25942D6AF77D8F904D0742C", 00:22:34.663 "uuid": "2afe26de-a259-42d6-af77-d8f904d0742c" 00:22:34.663 } 00:22:34.663 ] 00:22:34.663 } 00:22:34.663 ] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 131804 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.663 rmmod 
nvme_tcp 00:22:34.663 rmmod nvme_fabrics 00:22:34.663 rmmod nvme_keyring 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 131677 ']' 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 131677 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 131677 ']' 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 131677 00:22:34.663 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:34.664 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.664 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131677 00:22:34.923 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.923 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.923 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131677' 00:22:34.923 killing process with pid 131677 00:22:34.923 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 131677 00:22:34.923 04:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 131677 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.923 04:09:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.459 00:22:37.459 real 0m9.279s 00:22:37.459 user 0m5.549s 00:22:37.459 sys 0m4.820s 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 ************************************ 00:22:37.459 END TEST nvmf_aer 00:22:37.459 ************************************ 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 ************************************ 00:22:37.459 START TEST nvmf_async_init 00:22:37.459 ************************************ 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:37.459 * Looking for test storage... 00:22:37.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.459 --rc genhtml_branch_coverage=1 00:22:37.459 --rc genhtml_function_coverage=1 00:22:37.459 --rc genhtml_legend=1 00:22:37.459 --rc geninfo_all_blocks=1 00:22:37.459 --rc geninfo_unexecuted_blocks=1 00:22:37.459 00:22:37.459 ' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.459 --rc genhtml_branch_coverage=1 00:22:37.459 --rc genhtml_function_coverage=1 00:22:37.459 --rc genhtml_legend=1 00:22:37.459 --rc geninfo_all_blocks=1 00:22:37.459 --rc geninfo_unexecuted_blocks=1 00:22:37.459 00:22:37.459 ' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.459 --rc genhtml_branch_coverage=1 00:22:37.459 --rc genhtml_function_coverage=1 00:22:37.459 --rc genhtml_legend=1 00:22:37.459 --rc geninfo_all_blocks=1 00:22:37.459 --rc geninfo_unexecuted_blocks=1 00:22:37.459 00:22:37.459 ' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.459 --rc genhtml_branch_coverage=1 00:22:37.459 --rc genhtml_function_coverage=1 00:22:37.459 --rc genhtml_legend=1 00:22:37.459 --rc geninfo_all_blocks=1 00:22:37.459 --rc geninfo_unexecuted_blocks=1 00:22:37.459 00:22:37.459 ' 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.459 04:09:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.459 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:37.460 04:09:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2b4b690f43154b6096c051bf98c69ae1 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.460 04:09:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:44.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:44.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.029 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:44.030 Found net devices under 0000:af:00.0: cvl_0_0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:44.030 Found net devices under 0000:af:00.1: cvl_0_1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.030 04:09:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:22:44.030 00:22:44.030 --- 10.0.0.2 ping statistics --- 00:22:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.030 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:44.030 00:22:44.030 --- 10.0.0.1 ping statistics --- 00:22:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.030 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=135272 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 135272 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 135272 ']' 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 [2024-12-10 04:09:42.426288] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
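Note on the setup traced above: nvmftestinit splits the two physical e810 ports into a self-contained topology. cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule is inserted for TCP port 4420, and the two pings confirm reachability in both directions. The "common.sh: line 33: [: : integer expression expected" message earlier in the trace is benign; it comes from an empty value being tested with -eq. Below is a minimal sketch of that plumbing, assuming root privileges and the interface names from this run; the guarded test at the end uses a hypothetical variable name, not the actual common.sh code.

# reconstruction of the namespace setup traced above; assumes root and
# that cvl_0_0/cvl_0_1 are the two e810 port netdevs
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# guarded form of the common.sh line-33 style test; SPDK_TEST_EXAMPLE is a
# hypothetical stand-in for whichever flag expanded empty in this run
[ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ] && echo 'feature enabled'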
00:22:44.030 [2024-12-10 04:09:42.426336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.030 [2024-12-10 04:09:42.505946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.030 [2024-12-10 04:09:42.546547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.030 [2024-12-10 04:09:42.546584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.030 [2024-12-10 04:09:42.546591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.030 [2024-12-10 04:09:42.546598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.030 [2024-12-10 04:09:42.546604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.030 [2024-12-10 04:09:42.547071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 [2024-12-10 04:09:42.682660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 null0 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2b4b690f43154b6096c051bf98c69ae1 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.030 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 [2024-12-10 04:09:42.734945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 nvme0n1 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 [ 00:22:44.031 { 00:22:44.031 "name": "nvme0n1", 00:22:44.031 "aliases": [ 00:22:44.031 "2b4b690f-4315-4b60-96c0-51bf98c69ae1" 00:22:44.031 ], 00:22:44.031 "product_name": "NVMe disk", 00:22:44.031 "block_size": 512, 00:22:44.031 "num_blocks": 2097152, 00:22:44.031 "uuid": "2b4b690f-4315-4b60-96c0-51bf98c69ae1", 00:22:44.031 "numa_id": 1, 00:22:44.031 "assigned_rate_limits": { 00:22:44.031 "rw_ios_per_sec": 0, 00:22:44.031 "rw_mbytes_per_sec": 0, 00:22:44.031 "r_mbytes_per_sec": 0, 00:22:44.031 "w_mbytes_per_sec": 0 00:22:44.031 }, 00:22:44.031 "claimed": false, 00:22:44.031 "zoned": false, 00:22:44.031 "supported_io_types": { 00:22:44.031 "read": true, 00:22:44.031 "write": true, 00:22:44.031 "unmap": false, 00:22:44.031 "flush": true, 00:22:44.031 "reset": true, 00:22:44.031 "nvme_admin": true, 00:22:44.031 "nvme_io": true, 00:22:44.031 "nvme_io_md": false, 00:22:44.031 "write_zeroes": true, 00:22:44.031 "zcopy": false, 00:22:44.031 "get_zone_info": false, 00:22:44.031 "zone_management": false, 00:22:44.031 "zone_append": false, 00:22:44.031 "compare": true, 00:22:44.031 "compare_and_write": true, 00:22:44.031 "abort": true, 00:22:44.031 "seek_hole": false, 00:22:44.031 "seek_data": false, 00:22:44.031 "copy": true, 00:22:44.031 "nvme_iov_md": false 00:22:44.031 }, 00:22:44.031 
"memory_domains": [ 00:22:44.031 { 00:22:44.031 "dma_device_id": "system", 00:22:44.031 "dma_device_type": 1 00:22:44.031 } 00:22:44.031 ], 00:22:44.031 "driver_specific": { 00:22:44.031 "nvme": [ 00:22:44.031 { 00:22:44.031 "trid": { 00:22:44.031 "trtype": "TCP", 00:22:44.031 "adrfam": "IPv4", 00:22:44.031 "traddr": "10.0.0.2", 00:22:44.031 "trsvcid": "4420", 00:22:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:44.031 }, 00:22:44.031 "ctrlr_data": { 00:22:44.031 "cntlid": 1, 00:22:44.031 "vendor_id": "0x8086", 00:22:44.031 "model_number": "SPDK bdev Controller", 00:22:44.031 "serial_number": "00000000000000000000", 00:22:44.031 "firmware_revision": "25.01", 00:22:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.031 "oacs": { 00:22:44.031 "security": 0, 00:22:44.031 "format": 0, 00:22:44.031 "firmware": 0, 00:22:44.031 "ns_manage": 0 00:22:44.031 }, 00:22:44.031 "multi_ctrlr": true, 00:22:44.031 "ana_reporting": false 00:22:44.031 }, 00:22:44.031 "vs": { 00:22:44.031 "nvme_version": "1.3" 00:22:44.031 }, 00:22:44.031 "ns_data": { 00:22:44.031 "id": 1, 00:22:44.031 "can_share": true 00:22:44.031 } 00:22:44.031 } 00:22:44.031 ], 00:22:44.031 "mp_policy": "active_passive" 00:22:44.031 } 00:22:44.031 } 00:22:44.031 ] 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 [2024-12-10 04:09:43.003484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:44.031 [2024-12-10 04:09:43.003542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2338250 (9): Bad file descriptor 00:22:44.031 [2024-12-10 04:09:43.135257] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 [ 00:22:44.031 { 00:22:44.031 "name": "nvme0n1", 00:22:44.031 "aliases": [ 00:22:44.031 "2b4b690f-4315-4b60-96c0-51bf98c69ae1" 00:22:44.031 ], 00:22:44.031 "product_name": "NVMe disk", 00:22:44.031 "block_size": 512, 00:22:44.031 "num_blocks": 2097152, 00:22:44.031 "uuid": "2b4b690f-4315-4b60-96c0-51bf98c69ae1", 00:22:44.031 "numa_id": 1, 00:22:44.031 "assigned_rate_limits": { 00:22:44.031 "rw_ios_per_sec": 0, 00:22:44.031 "rw_mbytes_per_sec": 0, 00:22:44.031 "r_mbytes_per_sec": 0, 00:22:44.031 "w_mbytes_per_sec": 0 00:22:44.031 }, 00:22:44.031 "claimed": false, 00:22:44.031 "zoned": false, 00:22:44.031 "supported_io_types": { 00:22:44.031 "read": true, 00:22:44.031 "write": true, 00:22:44.031 "unmap": false, 00:22:44.031 "flush": true, 00:22:44.031 "reset": true, 00:22:44.031 "nvme_admin": true, 00:22:44.031 "nvme_io": true, 00:22:44.031 "nvme_io_md": false, 00:22:44.031 "write_zeroes": true, 00:22:44.031 "zcopy": false, 00:22:44.031 "get_zone_info": false, 00:22:44.031 "zone_management": false, 00:22:44.031 "zone_append": false, 00:22:44.031 "compare": true, 00:22:44.031 "compare_and_write": true, 00:22:44.031 "abort": true, 00:22:44.031 "seek_hole": false, 00:22:44.031 "seek_data": false, 00:22:44.031 "copy": true, 00:22:44.031 "nvme_iov_md": false 00:22:44.031 }, 00:22:44.031 "memory_domains": [ 00:22:44.031 { 00:22:44.031 "dma_device_id": "system", 00:22:44.031 "dma_device_type": 1 00:22:44.031 } 00:22:44.031 ], 00:22:44.031 "driver_specific": { 00:22:44.031 "nvme": [ 00:22:44.031 { 00:22:44.031 "trid": { 00:22:44.031 "trtype": "TCP", 00:22:44.031 "adrfam": "IPv4", 00:22:44.031 "traddr": "10.0.0.2", 00:22:44.031 "trsvcid": "4420", 00:22:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:44.031 }, 00:22:44.031 "ctrlr_data": { 00:22:44.031 "cntlid": 2, 00:22:44.031 "vendor_id": "0x8086", 00:22:44.031 "model_number": "SPDK bdev Controller", 00:22:44.031 "serial_number": "00000000000000000000", 00:22:44.031 "firmware_revision": "25.01", 00:22:44.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.031 "oacs": { 00:22:44.031 "security": 0, 00:22:44.031 "format": 0, 00:22:44.031 "firmware": 0, 00:22:44.031 "ns_manage": 0 00:22:44.031 }, 00:22:44.031 "multi_ctrlr": true, 00:22:44.031 "ana_reporting": false 00:22:44.031 }, 00:22:44.031 "vs": { 00:22:44.031 "nvme_version": "1.3" 00:22:44.031 }, 00:22:44.031 "ns_data": { 00:22:44.031 "id": 1, 00:22:44.031 "can_share": true 00:22:44.031 } 00:22:44.031 } 00:22:44.031 ], 00:22:44.031 "mp_policy": "active_passive" 00:22:44.031 } 00:22:44.031 } 00:22:44.031 ] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
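Condensed, the happy-path portion of the test above is a six-call RPC sequence: create the TCP transport, back a 1024 MiB null bdev with 512-byte blocks (hence num_blocks 2097152 in the dumps), expose it as namespace 1 of nqn.2016-06.io.spdk:cnode0 with an explicit NGUID (the uuidgen output with its dashes stripped, which resurfaces dash-formatted as the bdev uuid and alias), listen on 10.0.0.2:4420, and attach from the initiator side. The commands below are lifted from the trace; the scripts/rpc.py path and default RPC socket are assumptions of this sketch.

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512        # 1024 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g 2b4b690f43154b6096c051bf98c69ae1                 # NGUID = uuidgen | tr -d -
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# initiator side: surfaces the remote namespace as local bdev nvme0n1
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0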
00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HZh7I2hSKN 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HZh7I2hSKN 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.HZh7I2hSKN 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.031 [2024-12-10 04:09:43.212104] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.031 [2024-12-10 04:09:43.212214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.031 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.032 [2024-12-10 04:09:43.232172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.032 nvme0n1 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.032 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.032 [ 00:22:44.032 { 00:22:44.032 "name": "nvme0n1", 00:22:44.032 "aliases": [ 00:22:44.032 "2b4b690f-4315-4b60-96c0-51bf98c69ae1" 00:22:44.032 ], 00:22:44.032 "product_name": "NVMe disk", 00:22:44.032 "block_size": 512, 00:22:44.032 "num_blocks": 2097152, 00:22:44.032 "uuid": "2b4b690f-4315-4b60-96c0-51bf98c69ae1", 00:22:44.032 "numa_id": 1, 00:22:44.291 "assigned_rate_limits": { 00:22:44.291 "rw_ios_per_sec": 0, 00:22:44.291 "rw_mbytes_per_sec": 0, 00:22:44.291 "r_mbytes_per_sec": 0, 00:22:44.291 "w_mbytes_per_sec": 0 00:22:44.291 }, 00:22:44.291 "claimed": false, 00:22:44.291 "zoned": false, 00:22:44.291 "supported_io_types": { 00:22:44.291 "read": true, 00:22:44.291 "write": true, 00:22:44.291 "unmap": false, 00:22:44.291 "flush": true, 00:22:44.291 "reset": true, 00:22:44.291 "nvme_admin": true, 00:22:44.291 "nvme_io": true, 00:22:44.291 "nvme_io_md": false, 00:22:44.291 "write_zeroes": true, 00:22:44.291 "zcopy": false, 00:22:44.291 "get_zone_info": false, 00:22:44.291 "zone_management": false, 00:22:44.291 "zone_append": false, 00:22:44.291 "compare": true, 00:22:44.291 "compare_and_write": true, 00:22:44.291 "abort": true, 00:22:44.291 "seek_hole": false, 00:22:44.291 "seek_data": false, 00:22:44.291 "copy": true, 00:22:44.291 "nvme_iov_md": false 00:22:44.291 }, 00:22:44.291 "memory_domains": [ 00:22:44.291 { 00:22:44.291 "dma_device_id": "system", 00:22:44.291 "dma_device_type": 1 00:22:44.291 } 00:22:44.291 ], 00:22:44.291 "driver_specific": { 00:22:44.291 "nvme": [ 00:22:44.291 { 00:22:44.291 "trid": { 00:22:44.291 "trtype": "TCP", 00:22:44.291 "adrfam": "IPv4", 00:22:44.291 "traddr": "10.0.0.2", 00:22:44.291 "trsvcid": "4421", 00:22:44.291 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:44.291 }, 00:22:44.291 "ctrlr_data": { 00:22:44.291 "cntlid": 3, 00:22:44.291 "vendor_id": "0x8086", 00:22:44.291 "model_number": "SPDK bdev Controller", 00:22:44.291 "serial_number": "00000000000000000000", 00:22:44.291 "firmware_revision": "25.01", 00:22:44.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.291 "oacs": { 00:22:44.291 "security": 0, 00:22:44.291 "format": 0, 00:22:44.291 "firmware": 0, 00:22:44.291 "ns_manage": 0 00:22:44.291 }, 00:22:44.291 "multi_ctrlr": true, 00:22:44.291 "ana_reporting": false 00:22:44.291 }, 00:22:44.291 "vs": { 00:22:44.291 "nvme_version": "1.3" 00:22:44.291 }, 00:22:44.291 "ns_data": { 00:22:44.291 "id": 1, 00:22:44.291 "can_share": true 00:22:44.291 } 00:22:44.291 } 00:22:44.291 ], 00:22:44.291 "mp_policy": "active_passive" 00:22:44.291 } 00:22:44.291 } 00:22:44.291 ] 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.HZh7I2hSKN 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
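The TLS leg repeats the attach/inspect/detach cycle over a secure channel on port 4421. The key is a pre-shared key in the NVMe/TCP interchange format (the NVMeTLSkey-1:01:... string), written to a mode-0600 temp file and registered with the file-based keyring as key0; it must then be referenced on both ends, and the two "*** TLS support is considered experimental ***" notices above are expected output. A condensed sketch using the names from this run (the temp-file path is whatever mktemp produced here):

# target side: per-host authorization plus a TLS-only listener
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HZh7I2hSKN
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side: host NQN must match the authorized host entry, same key
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0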
00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.291 rmmod nvme_tcp 00:22:44.291 rmmod nvme_fabrics 00:22:44.291 rmmod nvme_keyring 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 135272 ']' 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 135272 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 135272 ']' 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 135272 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135272 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135272' 00:22:44.291 killing process with pid 135272 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 135272 00:22:44.291 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 135272 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.550 
04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.550 04:09:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.462 00:22:46.462 real 0m9.426s 00:22:46.462 user 0m3.023s 00:22:46.462 sys 0m4.820s 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.462 ************************************ 00:22:46.462 END TEST nvmf_async_init 00:22:46.462 ************************************ 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.462 04:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.721 ************************************ 00:22:46.721 START TEST dma 00:22:46.721 ************************************ 00:22:46.721 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:46.721 * Looking for test storage... 00:22:46.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.721 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:46.721 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:46.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.722 --rc genhtml_branch_coverage=1 00:22:46.722 --rc genhtml_function_coverage=1 00:22:46.722 --rc genhtml_legend=1 00:22:46.722 --rc geninfo_all_blocks=1 00:22:46.722 --rc geninfo_unexecuted_blocks=1 00:22:46.722 00:22:46.722 ' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:46.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.722 --rc genhtml_branch_coverage=1 00:22:46.722 --rc genhtml_function_coverage=1 00:22:46.722 --rc genhtml_legend=1 00:22:46.722 --rc geninfo_all_blocks=1 00:22:46.722 --rc geninfo_unexecuted_blocks=1 00:22:46.722 00:22:46.722 ' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:46.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.722 --rc genhtml_branch_coverage=1 00:22:46.722 --rc genhtml_function_coverage=1 00:22:46.722 --rc genhtml_legend=1 00:22:46.722 --rc geninfo_all_blocks=1 00:22:46.722 --rc geninfo_unexecuted_blocks=1 00:22:46.722 00:22:46.722 ' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:46.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.722 --rc genhtml_branch_coverage=1 00:22:46.722 --rc genhtml_function_coverage=1 00:22:46.722 --rc genhtml_legend=1 00:22:46.722 --rc geninfo_all_blocks=1 00:22:46.722 --rc geninfo_unexecuted_blocks=1 00:22:46.722 00:22:46.722 ' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.722 
04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:46.722 00:22:46.722 real 0m0.202s 00:22:46.722 user 0m0.122s 00:22:46.722 sys 0m0.094s 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.722 04:09:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:46.722 ************************************ 00:22:46.722 END TEST dma 00:22:46.722 ************************************ 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.982 ************************************ 00:22:46.982 START TEST nvmf_identify 00:22:46.982 
************************************ 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:46.982 * Looking for test storage... 00:22:46.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.982 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:46.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.982 --rc genhtml_branch_coverage=1 00:22:46.982 --rc genhtml_function_coverage=1 00:22:46.982 --rc genhtml_legend=1 00:22:46.983 --rc geninfo_all_blocks=1 00:22:46.983 --rc geninfo_unexecuted_blocks=1 00:22:46.983 00:22:46.983 ' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.983 --rc genhtml_branch_coverage=1 00:22:46.983 --rc genhtml_function_coverage=1 00:22:46.983 --rc genhtml_legend=1 00:22:46.983 --rc geninfo_all_blocks=1 00:22:46.983 --rc geninfo_unexecuted_blocks=1 00:22:46.983 00:22:46.983 ' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.983 --rc genhtml_branch_coverage=1 00:22:46.983 --rc genhtml_function_coverage=1 00:22:46.983 --rc genhtml_legend=1 00:22:46.983 --rc geninfo_all_blocks=1 00:22:46.983 --rc geninfo_unexecuted_blocks=1 00:22:46.983 00:22:46.983 ' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.983 --rc genhtml_branch_coverage=1 00:22:46.983 --rc genhtml_function_coverage=1 00:22:46.983 --rc genhtml_legend=1 00:22:46.983 --rc geninfo_all_blocks=1 00:22:46.983 --rc geninfo_unexecuted_blocks=1 00:22:46.983 00:22:46.983 ' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.983 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.242 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.242 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.242 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.242 04:09:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.809 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:53.810 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:53.810 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
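The scan above found two Intel E810 ports, 0000:af:00.0 and 0000:af:00.1, both bound to the ice driver with device ID 0x159b. The loop that follows resolves each PCI function to its kernel net device by globbing sysfs; a minimal stand-alone sketch of that lookup, assuming the same sysfs layout nvmf/common.sh relies on (the real gather_supported_nvmf_pci_devs adds driver and link-state checks):

    # map each NVMe-oF-capable PCI function to its net interface via sysfs
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue          # skip if the glob matched nothing
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done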
00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:53.810 Found net devices under 0000:af:00.0: cvl_0_0 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:53.810 Found net devices under 0000:af:00.1: cvl_0_1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.810 04:09:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:22:53.810 00:22:53.810 --- 10.0.0.2 ping statistics --- 00:22:53.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.810 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:53.810 00:22:53.810 --- 10.0.0.1 ping statistics --- 00:22:53.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.810 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=139031 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 139031 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 139031 ']' 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.810 04:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.810 [2024-12-10 04:09:52.187050] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
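At this point the target has just been launched inside the cvl_0_0_ns_spdk namespace via ip netns exec. nvmftestinit built a two-namespace loopback over the physical E810 link: the target-side port cvl_0_0 (10.0.0.2) lives in the namespace, the initiator-side port cvl_0_1 (10.0.0.1) stays in the default namespace, and both directions were ping-verified above. Condensed from the ip commands traced earlier (the real common.sh also flushes stale addresses first and opens TCP port 4420 in iptables), the plumbing is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # default ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator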
00:22:53.810 [2024-12-10 04:09:52.187106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.810 [2024-12-10 04:09:52.267337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.810 [2024-12-10 04:09:52.308448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.810 [2024-12-10 04:09:52.308489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.810 [2024-12-10 04:09:52.308496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.810 [2024-12-10 04:09:52.308502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.810 [2024-12-10 04:09:52.308507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.810 [2024-12-10 04:09:52.309973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.810 [2024-12-10 04:09:52.310080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.810 [2024-12-10 04:09:52.310108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.810 [2024-12-10 04:09:52.310110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.810 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.810 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:53.810 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.810 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.811 [2024-12-10 04:09:53.037311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.811 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 Malloc0 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 [2024-12-10 04:09:53.130788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 [ 00:22:54.073 { 00:22:54.073 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:54.073 "subtype": "Discovery", 00:22:54.073 "listen_addresses": [ 00:22:54.073 { 00:22:54.073 "trtype": "TCP", 00:22:54.073 "adrfam": "IPv4", 00:22:54.073 "traddr": "10.0.0.2", 00:22:54.073 "trsvcid": "4420" 00:22:54.073 } 00:22:54.073 ], 00:22:54.073 "allow_any_host": true, 00:22:54.073 "hosts": [] 00:22:54.073 }, 00:22:54.073 { 00:22:54.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.073 "subtype": "NVMe", 00:22:54.073 "listen_addresses": [ 00:22:54.073 { 00:22:54.073 "trtype": "TCP", 00:22:54.073 "adrfam": "IPv4", 00:22:54.073 "traddr": "10.0.0.2", 00:22:54.073 "trsvcid": "4420" 00:22:54.073 } 00:22:54.073 ], 00:22:54.073 "allow_any_host": true, 00:22:54.073 "hosts": [], 00:22:54.073 "serial_number": "SPDK00000000000001", 00:22:54.073 "model_number": "SPDK bdev Controller", 00:22:54.073 "max_namespaces": 32, 00:22:54.073 "min_cntlid": 1, 00:22:54.073 "max_cntlid": 65519, 00:22:54.073 "namespaces": [ 00:22:54.073 { 00:22:54.073 "nsid": 1, 00:22:54.073 "bdev_name": "Malloc0", 00:22:54.073 "name": "Malloc0", 00:22:54.073 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:54.073 "eui64": "ABCDEF0123456789", 00:22:54.073 "uuid": "22264476-f08d-4754-95cd-592927b01571" 00:22:54.073 } 00:22:54.073 ] 00:22:54.073 } 00:22:54.073 ] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:54.073 [2024-12-10 04:09:53.183432] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:22:54.073 [2024-12-10 04:09:53.183475] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139275 ] 00:22:54.073 [2024-12-10 04:09:53.223664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:54.073 [2024-12-10 04:09:53.223707] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:54.073 [2024-12-10 04:09:53.223712] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:54.073 [2024-12-10 04:09:53.223722] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:54.073 [2024-12-10 04:09:53.223732] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:54.073 [2024-12-10 04:09:53.227389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:54.073 [2024-12-10 04:09:53.227425] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xee1690 0 00:22:54.073 [2024-12-10 04:09:53.235176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:54.074 [2024-12-10 04:09:53.235189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:54.074 [2024-12-10 04:09:53.235193] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:54.074 [2024-12-10 04:09:53.235197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:54.074 [2024-12-10 04:09:53.235230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.235236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.235239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.235252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:54.074 [2024-12-10 04:09:53.235269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.243175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.243184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.243187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.243204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:54.074 [2024-12-10 04:09:53.243211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:54.074 [2024-12-10 04:09:53.243215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:54.074 [2024-12-10 04:09:53.243227] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.243241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.243254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.243417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.243422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.243425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.243434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:54.074 [2024-12-10 04:09:53.243441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:54.074 [2024-12-10 04:09:53.243447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.243459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.243469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.243546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.243552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.243555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.243563] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:54.074 [2024-12-10 04:09:53.243570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.243576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.243588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.243597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 
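The DEBUG trace here is spdk_nvme_identify bringing up the discovery controller's admin queue: the ICReq/ICResp exchange completes, a FABRIC CONNECT returns CNTLID 0x0001, and FABRIC PROPERTY GET reads of the VS and CAP registers precede the CC.EN check; the disable/enable sequence and CSTS.RDY polling follow below. The same discovery exchange can be driven from the kernel initiator with nvme-cli instead (assuming nvme-cli is available on the host; the hostnqn is the one generated at the top of the test):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562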
00:22:54.074 [2024-12-10 04:09:53.243658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.243664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.243667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.243675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.243684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.243696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.243705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.243775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.243781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.243783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.243791] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:54.074 [2024-12-10 04:09:53.243795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.243803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.243910] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:54.074 [2024-12-10 04:09:53.243914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.243922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.243928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.243935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.243946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.244007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.244013] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.244016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.244024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:54.074 [2024-12-10 04:09:53.244031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.244044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.244053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.244124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.244129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.244132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.244140] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:54.074 [2024-12-10 04:09:53.244144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:54.074 [2024-12-10 04:09:53.244151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:54.074 [2024-12-10 04:09:53.244157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:54.074 [2024-12-10 04:09:53.244172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.074 [2024-12-10 04:09:53.244181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.074 [2024-12-10 04:09:53.244191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.074 [2024-12-10 04:09:53.244276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.074 [2024-12-10 04:09:53.244282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.074 [2024-12-10 04:09:53.244285] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244288] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee1690): datao=0, datal=4096, cccid=0 00:22:54.074 [2024-12-10 04:09:53.244293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf43100) on tqpair(0xee1690): expected_datao=0, payload_size=4096 00:22:54.074 [2024-12-10 04:09:53.244297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244303] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244307] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.074 [2024-12-10 04:09:53.244329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.074 [2024-12-10 04:09:53.244332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.074 [2024-12-10 04:09:53.244335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.074 [2024-12-10 04:09:53.244343] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:54.075 [2024-12-10 04:09:53.244347] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:54.075 [2024-12-10 04:09:53.244351] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:54.075 [2024-12-10 04:09:53.244356] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:54.075 [2024-12-10 04:09:53.244360] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:54.075 [2024-12-10 04:09:53.244364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:54.075 [2024-12-10 04:09:53.244372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:54.075 [2024-12-10 04:09:53.244378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 04:09:53.244391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.075 [2024-12-10 04:09:53.244401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.075 [2024-12-10 04:09:53.244464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.075 [2024-12-10 04:09:53.244470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.075 [2024-12-10 04:09:53.244473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690 00:22:54.075 [2024-12-10 04:09:53.244483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 
04:09:53.244495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.075 [2024-12-10 04:09:53.244500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 04:09:53.244511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.075 [2024-12-10 04:09:53.244516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 04:09:53.244527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.075 [2024-12-10 04:09:53.244532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 04:09:53.244545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.075 [2024-12-10 04:09:53.244549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:54.075 [2024-12-10 04:09:53.244560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:54.075 [2024-12-10 04:09:53.244566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.075 [2024-12-10 04:09:53.244569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee1690) 00:22:54.075 [2024-12-10 04:09:53.244574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.075 [2024-12-10 04:09:53.244586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43100, cid 0, qid 0 00:22:54.075 [2024-12-10 04:09:53.244590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43280, cid 1, qid 0 00:22:54.075 [2024-12-10 04:09:53.244594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43400, cid 2, qid 0 00:22:54.075 [2024-12-10 04:09:53.244598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.075 [2024-12-10 04:09:53.244603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43700, cid 4, qid 0 00:22:54.075 [2024-12-10 04:09:53.244693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.075 [2024-12-10 04:09:53.244699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.075 [2024-12-10 04:09:53.244702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.075 
[2024-12-10 04:09:53.244705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43700) on tqpair=0xee1690
00:22:54.075 [2024-12-10 04:09:53.244710] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:22:54.075 [2024-12-10 04:09:53.244714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:22:54.075 [2024-12-10 04:09:53.244724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee1690)
00:22:54.075 [2024-12-10 04:09:53.244733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.075 [2024-12-10 04:09:53.244742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43700, cid 4, qid 0
00:22:54.075 [2024-12-10 04:09:53.244807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:54.075 [2024-12-10 04:09:53.244813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:54.075 [2024-12-10 04:09:53.244816] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee1690): datao=0, datal=4096, cccid=4
00:22:54.075 [2024-12-10 04:09:53.244823] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf43700) on tqpair(0xee1690): expected_datao=0, payload_size=4096
00:22:54.075 [2024-12-10 04:09:53.244827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244841] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244845] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.075 [2024-12-10 04:09:53.244881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.075 [2024-12-10 04:09:53.244884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43700) on tqpair=0xee1690
00:22:54.075 [2024-12-10 04:09:53.244900] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:22:54.075 [2024-12-10 04:09:53.244922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee1690)
00:22:54.075 [2024-12-10 04:09:53.244932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.075 [2024-12-10 04:09:53.244937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.244943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xee1690)
00:22:54.075 [2024-12-10 04:09:53.244949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:54.075 [2024-12-10 04:09:53.244962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43700, cid 4, qid 0
00:22:54.075 [2024-12-10 04:09:53.244967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43880, cid 5, qid 0
00:22:54.075 [2024-12-10 04:09:53.245068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:54.075 [2024-12-10 04:09:53.245074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:54.075 [2024-12-10 04:09:53.245077] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.245080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee1690): datao=0, datal=1024, cccid=4
00:22:54.075 [2024-12-10 04:09:53.245084] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf43700) on tqpair(0xee1690): expected_datao=0, payload_size=1024
00:22:54.075 [2024-12-10 04:09:53.245087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.245093] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.245096] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.245101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.075 [2024-12-10 04:09:53.245105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.075 [2024-12-10 04:09:53.245109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.245112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43880) on tqpair=0xee1690
00:22:54.075 [2024-12-10 04:09:53.285318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.075 [2024-12-10 04:09:53.285329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.075 [2024-12-10 04:09:53.285333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.285336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43700) on tqpair=0xee1690
00:22:54.075 [2024-12-10 04:09:53.285348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.285351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee1690)
00:22:54.075 [2024-12-10 04:09:53.285357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.075 [2024-12-10 04:09:53.285373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43700, cid 4, qid 0
00:22:54.075 [2024-12-10 04:09:53.285446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:54.075 [2024-12-10 04:09:53.285452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:54.075 [2024-12-10 04:09:53.285455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:54.075 [2024-12-10 04:09:53.285458] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee1690): datao=0, datal=3072, cccid=4
00:22:54.075 [2024-12-10 04:09:53.285462] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf43700) on tqpair(0xee1690): expected_datao=0, payload_size=3072
00:22:54.075 [2024-12-10 04:09:53.285469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
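A note on the "pdu type = N" lines that recur throughout this trace: they are the PDU opcodes defined by the NVMe/TCP transport specification, and the values match the handlers the trace dispatches to (type 5 reaches nvme_tcp_capsule_resp_hdr_handle, type 7 reaches nvme_tcp_c2h_data_hdr_handle, type 1 reaches nvme_tcp_icresp_handle). The following standalone C sketch decodes the types seen here; the enum and function names are illustrative, not SPDK's internal API.

#include <stdint.h>
#include <stdio.h>

/* NVMe/TCP PDU opcodes per the NVMe/TCP transport spec; the numeric
 * values line up with the "pdu type = N" lines in the trace. */
enum tcp_pdu_type {
	PDU_ICREQ        = 0x00,
	PDU_ICRESP       = 0x01,
	PDU_H2C_TERM_REQ = 0x02,
	PDU_C2H_TERM_REQ = 0x03,
	PDU_CAPSULE_CMD  = 0x04,
	PDU_CAPSULE_RESP = 0x05,
	PDU_H2C_DATA     = 0x06,
	PDU_C2H_DATA     = 0x07,
	PDU_R2T          = 0x09,
};

/* Mirrors the dispatch the trace shows for PDUs arriving at the host. */
static const char *pdu_name(uint8_t t)
{
	switch (t) {
	case PDU_ICRESP:       return "ICResp (connection init response)";
	case PDU_CAPSULE_RESP: return "CapsuleResp (command completion)";
	case PDU_C2H_DATA:     return "C2HData (controller-to-host data)";
	case PDU_R2T:          return "R2T (ready to transfer)";
	default:               return "other";
	}
}

int main(void)
{
	printf("pdu type = 5 -> %s\n", pdu_name(PDU_CAPSULE_RESP));
	printf("pdu type = 7 -> %s\n", pdu_name(PDU_C2H_DATA));
	return 0;
}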
00:22:54.075 [2024-12-10 04:09:53.285475] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285478] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.076 [2024-12-10 04:09:53.285506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.076 [2024-12-10 04:09:53.285509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43700) on tqpair=0xee1690
00:22:54.076 [2024-12-10 04:09:53.285520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xee1690)
00:22:54.076 [2024-12-10 04:09:53.285529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.076 [2024-12-10 04:09:53.285542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43700, cid 4, qid 0
00:22:54.076 [2024-12-10 04:09:53.285612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:54.076 [2024-12-10 04:09:53.285617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:54.076 [2024-12-10 04:09:53.285621] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xee1690): datao=0, datal=8, cccid=4
00:22:54.076 [2024-12-10 04:09:53.285628] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf43700) on tqpair(0xee1690): expected_datao=0, payload_size=8
00:22:54.076 [2024-12-10 04:09:53.285631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285637] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.285640] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.330180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.076 [2024-12-10 04:09:53.330191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.076 [2024-12-10 04:09:53.330194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.076 [2024-12-10 04:09:53.330197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43700) on tqpair=0xee1690
00:22:54.076 =====================================================
00:22:54.076 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:54.076 =====================================================
00:22:54.076 Controller Capabilities/Features
00:22:54.076 ================================
00:22:54.076 Vendor ID: 0000
00:22:54.076 Subsystem Vendor ID: 0000
00:22:54.076 Serial Number: ....................
00:22:54.076 Model Number: ........................................
00:22:54.076 Firmware Version: 25.01
00:22:54.076 Recommended Arb Burst: 0
00:22:54.076 IEEE OUI Identifier: 00 00 00
00:22:54.076 Multi-path I/O
00:22:54.076 May have multiple subsystem ports: No
00:22:54.076 May have multiple controllers: No
00:22:54.076 Associated with SR-IOV VF: No
00:22:54.076 Max Data Transfer Size: 131072
00:22:54.076 Max Number of Namespaces: 0
00:22:54.076 Max Number of I/O Queues: 1024
00:22:54.076 NVMe Specification Version (VS): 1.3
00:22:54.076 NVMe Specification Version (Identify): 1.3
00:22:54.076 Maximum Queue Entries: 128
00:22:54.076 Contiguous Queues Required: Yes
00:22:54.076 Arbitration Mechanisms Supported
00:22:54.076 Weighted Round Robin: Not Supported
00:22:54.076 Vendor Specific: Not Supported
00:22:54.076 Reset Timeout: 15000 ms
00:22:54.076 Doorbell Stride: 4 bytes
00:22:54.076 NVM Subsystem Reset: Not Supported
00:22:54.076 Command Sets Supported
00:22:54.076 NVM Command Set: Supported
00:22:54.076 Boot Partition: Not Supported
00:22:54.076 Memory Page Size Minimum: 4096 bytes
00:22:54.076 Memory Page Size Maximum: 4096 bytes
00:22:54.076 Persistent Memory Region: Not Supported
00:22:54.076 Optional Asynchronous Events Supported
00:22:54.076 Namespace Attribute Notices: Not Supported
00:22:54.076 Firmware Activation Notices: Not Supported
00:22:54.076 ANA Change Notices: Not Supported
00:22:54.076 PLE Aggregate Log Change Notices: Not Supported
00:22:54.076 LBA Status Info Alert Notices: Not Supported
00:22:54.076 EGE Aggregate Log Change Notices: Not Supported
00:22:54.076 Normal NVM Subsystem Shutdown event: Not Supported
00:22:54.076 Zone Descriptor Change Notices: Not Supported
00:22:54.076 Discovery Log Change Notices: Supported
00:22:54.076 Controller Attributes
00:22:54.076 128-bit Host Identifier: Not Supported
00:22:54.076 Non-Operational Permissive Mode: Not Supported
00:22:54.076 NVM Sets: Not Supported
00:22:54.076 Read Recovery Levels: Not Supported
00:22:54.076 Endurance Groups: Not Supported
00:22:54.076 Predictable Latency Mode: Not Supported
00:22:54.076 Traffic Based Keep Alive: Not Supported
00:22:54.076 Namespace Granularity: Not Supported
00:22:54.076 SQ Associations: Not Supported
00:22:54.076 UUID List: Not Supported
00:22:54.076 Multi-Domain Subsystem: Not Supported
00:22:54.076 Fixed Capacity Management: Not Supported
00:22:54.076 Variable Capacity Management: Not Supported
00:22:54.076 Delete Endurance Group: Not Supported
00:22:54.076 Delete NVM Set: Not Supported
00:22:54.076 Extended LBA Formats Supported: Not Supported
00:22:54.076 Flexible Data Placement Supported: Not Supported
00:22:54.076
00:22:54.076 Controller Memory Buffer Support
00:22:54.076 ================================
00:22:54.076 Supported: No
00:22:54.076
00:22:54.076 Persistent Memory Region Support
00:22:54.076 ================================
00:22:54.076 Supported: No
00:22:54.076
00:22:54.076 Admin Command Set Attributes
00:22:54.076 ============================
00:22:54.076 Security Send/Receive: Not Supported
00:22:54.076 Format NVM: Not Supported
00:22:54.076 Firmware Activate/Download: Not Supported
00:22:54.076 Namespace Management: Not Supported
00:22:54.076 Device Self-Test: Not Supported
00:22:54.076 Directives: Not Supported
00:22:54.076 NVMe-MI: Not Supported
00:22:54.076 Virtualization Management: Not Supported
00:22:54.076 Doorbell Buffer Config: Not Supported
00:22:54.076 Get LBA Status Capability: Not Supported
00:22:54.076 Command & Feature Lockdown Capability: Not Supported
00:22:54.076 Abort Command Limit: 1
00:22:54.076 Async Event Request Limit: 4
00:22:54.076 Number of Firmware Slots: N/A
00:22:54.076 Firmware Slot 1 Read-Only: N/A
00:22:54.076 Firmware Activation Without Reset: N/A
00:22:54.076 Multiple Update Detection Support: N/A
00:22:54.076 Firmware Update Granularity: No Information Provided
00:22:54.076 Per-Namespace SMART Log: No
00:22:54.076 Asymmetric Namespace Access Log Page: Not Supported
00:22:54.076 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:54.076 Command Effects Log Page: Not Supported
00:22:54.076 Get Log Page Extended Data: Supported
00:22:54.076 Telemetry Log Pages: Not Supported
00:22:54.076 Persistent Event Log Pages: Not Supported
00:22:54.076 Supported Log Pages Log Page: May Support
00:22:54.076 Commands Supported & Effects Log Page: Not Supported
00:22:54.076 Feature Identifiers & Effects Log Page: May Support
00:22:54.076 NVMe-MI Commands & Effects Log Page: May Support
00:22:54.076 Data Area 4 for Telemetry Log: Not Supported
00:22:54.076 Error Log Page Entries Supported: 128
00:22:54.076 Keep Alive: Not Supported
00:22:54.076
00:22:54.076 NVM Command Set Attributes
00:22:54.076 ==========================
00:22:54.076 Submission Queue Entry Size
00:22:54.076 Max: 1
00:22:54.076 Min: 1
00:22:54.076 Completion Queue Entry Size
00:22:54.076 Max: 1
00:22:54.076 Min: 1
00:22:54.076 Number of Namespaces: 0
00:22:54.076 Compare Command: Not Supported
00:22:54.076 Write Uncorrectable Command: Not Supported
00:22:54.076 Dataset Management Command: Not Supported
00:22:54.076 Write Zeroes Command: Not Supported
00:22:54.076 Set Features Save Field: Not Supported
00:22:54.076 Reservations: Not Supported
00:22:54.076 Timestamp: Not Supported
00:22:54.076 Copy: Not Supported
00:22:54.076 Volatile Write Cache: Not Present
00:22:54.076 Atomic Write Unit (Normal): 1
00:22:54.076 Atomic Write Unit (PFail): 1
00:22:54.076 Atomic Compare & Write Unit: 1
00:22:54.076 Fused Compare & Write: Supported
00:22:54.076 Scatter-Gather List
00:22:54.076 SGL Command Set: Supported
00:22:54.076 SGL Keyed: Supported
00:22:54.076 SGL Bit Bucket Descriptor: Not Supported
00:22:54.076 SGL Metadata Pointer: Not Supported
00:22:54.076 Oversized SGL: Not Supported
00:22:54.076 SGL Metadata Address: Not Supported
00:22:54.076 SGL Offset: Supported
00:22:54.076 Transport SGL Data Block: Not Supported
00:22:54.076 Replay Protected Memory Block: Not Supported
00:22:54.076
00:22:54.076 Firmware Slot Information
00:22:54.076 =========================
00:22:54.076 Active slot: 0
00:22:54.076
00:22:54.076
00:22:54.076 Error Log
00:22:54.077 =========
00:22:54.077
00:22:54.077 Active Namespaces
00:22:54.077 =================
00:22:54.077 Discovery Log Page
00:22:54.077 ==================
00:22:54.077 Generation Counter: 2
00:22:54.077 Number of Records: 2
00:22:54.077 Record Format: 0
00:22:54.077
00:22:54.077 Discovery Log Entry 0
00:22:54.077 ----------------------
00:22:54.077 Transport Type: 3 (TCP)
00:22:54.077 Address Family: 1 (IPv4)
00:22:54.077 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:54.077 Entry Flags:
00:22:54.077 Duplicate Returned Information: 1
00:22:54.077 Explicit Persistent Connection Support for Discovery: 1
00:22:54.077 Transport Requirements:
00:22:54.077 Secure Channel: Not Required
00:22:54.077 Port ID: 0 (0x0000)
00:22:54.077 Controller ID: 65535 (0xffff)
00:22:54.077 Admin Max SQ Size: 128
00:22:54.077 Transport Service Identifier: 4420
00:22:54.077 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:54.077 Transport Address: 10.0.0.2
00:22:54.077 Discovery Log Entry 1
00:22:54.077 ----------------------
00:22:54.077 Transport Type: 3 (TCP)
00:22:54.077 Address Family: 1 (IPv4)
00:22:54.077 Subsystem Type: 2 (NVM Subsystem)
00:22:54.077 Entry Flags:
00:22:54.077 Duplicate Returned Information: 0
00:22:54.077 Explicit Persistent Connection Support for Discovery: 0
00:22:54.077 Transport Requirements:
00:22:54.077 Secure Channel: Not Required
00:22:54.077 Port ID: 0 (0x0000)
00:22:54.077 Controller ID: 65535 (0xffff)
00:22:54.077 Admin Max SQ Size: 128
00:22:54.077 Transport Service Identifier: 4420
00:22:54.077 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:54.077 Transport Address: 10.0.0.2 [2024-12-10 04:09:53.330281] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:54.077 [2024-12-10 04:09:53.330292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43100) on tqpair=0xee1690
00:22:54.077 [2024-12-10 04:09:53.330298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.077 [2024-12-10 04:09:53.330303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43280) on tqpair=0xee1690
00:22:54.077 [2024-12-10 04:09:53.330307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.077 [2024-12-10 04:09:53.330311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43400) on tqpair=0xee1690
00:22:54.077 [2024-12-10 04:09:53.330315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.077 [2024-12-10 04:09:53.330319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690
00:22:54.077 [2024-12-10 04:09:53.330323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.077 [2024-12-10 04:09:53.330331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.077 [2024-12-10 04:09:53.330334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.077 [2024-12-10 04:09:53.330337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690)
00:22:54.077 [2024-12-10 04:09:53.330344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.077 [2024-12-10 04:09:53.330360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0
00:22:54.077 [2024-12-10 04:09:53.330446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.077 [2024-12-10 04:09:53.330452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.077 [2024-12-10 04:09:53.330455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.077 [2024-12-10 04:09:53.330458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690
00:22:54.077 [2024-12-10 04:09:53.330464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.077 [2024-12-10 04:09:53.330467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.077 [2024-12-10 04:09:53.330470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690)
00:22:54.077 [2024-12-10 04:09:53.330476]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.330489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.330575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.330580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.330583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.330591] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:54.077 [2024-12-10 04:09:53.330595] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:54.077 [2024-12-10 04:09:53.330603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.330615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.330624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.330690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.330695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.330698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.330710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.330723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.330733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.330808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.330814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.330816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.330827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330836] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.330841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.330851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.330927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.330932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.330935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.330947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.330953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.330959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.330969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.331042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.331047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.331050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.331062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.331074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.331082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.331149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.331155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.331158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.331174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.077 [2024-12-10 04:09:53.331187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.077 [2024-12-10 04:09:53.331196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.077 [2024-12-10 04:09:53.331260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.077 [2024-12-10 04:09:53.331266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.077 [2024-12-10 04:09:53.331268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.077 [2024-12-10 04:09:53.331272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.077 [2024-12-10 04:09:53.331279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331571] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.331886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.331891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.331894] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.331905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.331912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.331917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.331926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.332004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.332010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.332013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.332024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.332036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.332045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.332120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.332125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.332128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.078 [2024-12-10 04:09:53.332140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.078 [2024-12-10 04:09:53.332152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.078 [2024-12-10 04:09:53.332160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.078 [2024-12-10 04:09:53.332219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.078 [2024-12-10 04:09:53.332225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.078 [2024-12-10 04:09:53.332228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.078 [2024-12-10 04:09:53.332232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 
[2024-12-10 04:09:53.332240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 
04:09:53.332598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.332903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.332914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.332920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.332926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.332934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.332993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.332999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.333002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.333013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.333025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.333035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.333091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.333097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.333102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.333113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.333125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.333134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.333210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.333216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.333219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.333230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.333242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.333252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 
04:09:53.333312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.333318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.333321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.079 [2024-12-10 04:09:53.333332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.079 [2024-12-10 04:09:53.333344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.079 [2024-12-10 04:09:53.333353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.079 [2024-12-10 04:09:53.333428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.079 [2024-12-10 04:09:53.333434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.079 [2024-12-10 04:09:53.333437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.079 [2024-12-10 04:09:53.333440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.080 [2024-12-10 04:09:53.333448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.080 [2024-12-10 04:09:53.333460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.080 [2024-12-10 04:09:53.333469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.080 [2024-12-10 04:09:53.333528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.080 [2024-12-10 04:09:53.333534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.080 [2024-12-10 04:09:53.333536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.080 [2024-12-10 04:09:53.333550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.080 [2024-12-10 04:09:53.333562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.080 [2024-12-10 04:09:53.333572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.080 [2024-12-10 04:09:53.333646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.080 [2024-12-10 04:09:53.333652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.080 [2024-12-10 
04:09:53.333655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.080 [2024-12-10 04:09:53.333666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.080 [2024-12-10 04:09:53.333678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.080 [2024-12-10 04:09:53.333687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.080 [2024-12-10 04:09:53.333746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.080 [2024-12-10 04:09:53.333752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.080 [2024-12-10 04:09:53.333755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.080 [2024-12-10 04:09:53.333766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.080 [2024-12-10 04:09:53.333778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.080 [2024-12-10 04:09:53.333787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.080 [2024-12-10 04:09:53.333845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.080 [2024-12-10 04:09:53.333850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.080 [2024-12-10 04:09:53.333853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 00:22:54.080 [2024-12-10 04:09:53.333864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690) 00:22:54.080 [2024-12-10 04:09:53.333876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.080 [2024-12-10 04:09:53.333885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0 00:22:54.080 [2024-12-10 04:09:53.333944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.080 [2024-12-10 04:09:53.333950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.080 [2024-12-10 04:09:53.333953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.080 [2024-12-10 04:09:53.333956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690 
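The long, repetitive run of FABRIC PROPERTY GET capsules above (all cid:3 on the admin queue) is the host polling the discovery controller's CSTS register over the fabric while it shuts down: nvme_ctrlr_shutdown_set_cc_done set the shutdown bits in CC with a 10000 ms budget earlier, and the poll ends just below with "shutdown complete in 7 milliseconds". A toy C sketch of that loop follows, with a simulated property space standing in for the Property Get/Set capsules; prop_get/prop_set and fake_regs are hypothetical helpers for illustration, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

static uint32_t fake_regs[16];   /* toy stand-in for the controller's property space */

#define NVME_REG_CC   0x14       /* Controller Configuration */
#define NVME_REG_CSTS 0x1c       /* Controller Status */

/* Stand-in for a FABRIC PROPERTY GET capsule. A real controller flips
 * CSTS.SHST to "complete" on its own; fake that here once CC.SHN is set,
 * so the loop below terminates. */
static uint32_t prop_get(uint32_t off)
{
	if (off == NVME_REG_CSTS &&
	    ((fake_regs[NVME_REG_CC / 4] >> 14) & 3u) == 1u) {
		fake_regs[NVME_REG_CSTS / 4] |= 2u << 2;   /* SHST = 10b */
	}
	return fake_regs[off / 4];
}

/* Stand-in for a FABRIC PROPERTY SET capsule. */
static void prop_set(uint32_t off, uint32_t val) { fake_regs[off / 4] = val; }

int main(void)
{
	uint32_t cc = prop_get(NVME_REG_CC);
	cc = (cc & ~(3u << 14)) | (1u << 14);   /* CC.SHN = 01b: normal shutdown */
	prop_set(NVME_REG_CC, cc);

	/* Same 10000 ms budget the trace reports ("shutdown timeout = 10000 ms"). */
	for (int ms = 0; ms < 10000; ms++) {
		if (((prop_get(NVME_REG_CSTS) >> 2) & 3u) == 2u) {   /* SHST complete */
			printf("shutdown complete after ~%d poll(s)\n", ms + 1);
			return 0;
		}
		/* nanosleep(~1 ms) between polls, elided for brevity */
	}
	fprintf(stderr, "shutdown timed out\n");
	return 1;
}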
00:22:54.080 [2024-12-10 04:09:53.333964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.333969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.333972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690)
00:22:54.080 [2024-12-10 04:09:53.333978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.080 [2024-12-10 04:09:53.333987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0
00:22:54.080 [2024-12-10 04:09:53.334043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.080 [2024-12-10 04:09:53.334049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.080 [2024-12-10 04:09:53.334052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.334055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690
00:22:54.080 [2024-12-10 04:09:53.334063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.334067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.334070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690)
00:22:54.080 [2024-12-10 04:09:53.334075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.080 [2024-12-10 04:09:53.334085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0
00:22:54.080 [2024-12-10 04:09:53.338174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.080 [2024-12-10 04:09:53.338182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.080 [2024-12-10 04:09:53.338185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.338188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690
00:22:54.080 [2024-12-10 04:09:53.338199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.338202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.338205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xee1690)
00:22:54.080 [2024-12-10 04:09:53.338211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.080 [2024-12-10 04:09:53.338222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf43580, cid 3, qid 0
00:22:54.080 [2024-12-10 04:09:53.338372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.080 [2024-12-10 04:09:53.338377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.080 [2024-12-10 04:09:53.338380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.080 [2024-12-10 04:09:53.338384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf43580) on tqpair=0xee1690
00:22:54.080 [2024-12-10 04:09:53.338390] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:22:54.080
00:22:54.080 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:54.343 [2024-12-10 04:09:53.376114] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:22:54.343 [2024-12-10 04:09:53.376157] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139280 ]
00:22:54.343 [2024-12-10 04:09:53.416363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:22:54.343 [2024-12-10 04:09:53.416406] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:22:54.343 [2024-12-10 04:09:53.416411] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:22:54.343 [2024-12-10 04:09:53.416421] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:22:54.343 [2024-12-10 04:09:53.416428] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:22:54.343 [2024-12-10 04:09:53.420312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:22:54.343 [2024-12-10 04:09:53.420341] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13c9690 0
00:22:54.343 [2024-12-10 04:09:53.427178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:22:54.343 [2024-12-10 04:09:53.427191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:22:54.343 [2024-12-10 04:09:53.427196] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:22:54.343 [2024-12-10 04:09:53.427198] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:22:54.343 [2024-12-10 04:09:53.427226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:54.343 [2024-12-10 04:09:53.427231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:54.343 [2024-12-10 04:09:53.427235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690)
00:22:54.343 [2024-12-10 04:09:53.427244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:22:54.343 [2024-12-10 04:09:53.427261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0
00:22:54.343 [2024-12-10 04:09:53.435177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:54.343 [2024-12-10 04:09:53.435185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:54.343 [2024-12-10 04:09:53.435188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:54.343 [2024-12-10 04:09:53.435191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690
00:22:54.343 [2024-12-10 04:09:53.435202] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:22:54.343 [2024-12-10 04:09:53.435208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) [2024-12-10 04:09:53.435213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state:
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:54.343 [2024-12-10 04:09:53.435223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.343 [2024-12-10 04:09:53.435237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.343 [2024-12-10 04:09:53.435249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.343 [2024-12-10 04:09:53.435406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.343 [2024-12-10 04:09:53.435412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.343 [2024-12-10 04:09:53.435414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.343 [2024-12-10 04:09:53.435422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:54.343 [2024-12-10 04:09:53.435429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:54.343 [2024-12-10 04:09:53.435435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.343 [2024-12-10 04:09:53.435450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.343 [2024-12-10 04:09:53.435460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.343 [2024-12-10 04:09:53.435524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.343 [2024-12-10 04:09:53.435529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.343 [2024-12-10 04:09:53.435532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.343 [2024-12-10 04:09:53.435535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.343 [2024-12-10 04:09:53.435540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:54.344 [2024-12-10 04:09:53.435547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.435565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.344 [2024-12-10 
04:09:53.435574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.435633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 [2024-12-10 04:09:53.435639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.435642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.435649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.435670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.344 [2024-12-10 04:09:53.435679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.435739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 [2024-12-10 04:09:53.435745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.435748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.435755] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:54.344 [2024-12-10 04:09:53.435760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435874] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:54.344 [2024-12-10 04:09:53.435879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.435899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.344 [2024-12-10 04:09:53.435909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.435967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 
[2024-12-10 04:09:53.435973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.435976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.435983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:54.344 [2024-12-10 04:09:53.435991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.435998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.344 [2024-12-10 04:09:53.436012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.436075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 [2024-12-10 04:09:53.436081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.436084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.436091] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:54.344 [2024-12-10 04:09:53.436095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:54.344 [2024-12-10 04:09:53.436108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.344 [2024-12-10 04:09:53.436137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.436244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.344 [2024-12-10 04:09:53.436250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.344 [2024-12-10 04:09:53.436253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=4096, cccid=0 00:22:54.344 [2024-12-10 04:09:53.436260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x142b100) on tqpair(0x13c9690): expected_datao=0, payload_size=4096 00:22:54.344 [2024-12-10 04:09:53.436264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436272] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436276] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 [2024-12-10 04:09:53.436293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.436296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.436305] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:54.344 [2024-12-10 04:09:53.436309] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:54.344 [2024-12-10 04:09:53.436313] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:54.344 [2024-12-10 04:09:53.436316] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:54.344 [2024-12-10 04:09:53.436320] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:54.344 [2024-12-10 04:09:53.436324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.344 [2024-12-10 04:09:53.436360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.344 [2024-12-10 04:09:53.436424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.344 [2024-12-10 04:09:53.436429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.344 [2024-12-10 04:09:53.436433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.344 [2024-12-10 04:09:53.436441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.344 [2024-12-10 04:09:53.436457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.344 [2024-12-10 04:09:53.436473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.344 [2024-12-10 04:09:53.436491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.344 [2024-12-10 04:09:53.436502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.344 [2024-12-10 04:09:53.436506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:54.344 [2024-12-10 04:09:53.436521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.344 [2024-12-10 04:09:53.436525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.436530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.345 [2024-12-10 04:09:53.436540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b100, cid 0, qid 0 00:22:54.345 [2024-12-10 04:09:53.436545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b280, cid 1, qid 0 00:22:54.345 [2024-12-10 04:09:53.436549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b400, cid 2, qid 0 00:22:54.345 [2024-12-10 04:09:53.436553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.345 [2024-12-10 04:09:53.436557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.436650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.436656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.436659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.345 [2024-12-10 04:09:53.436666] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:54.345 [2024-12-10 04:09:53.436671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.436680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.436685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.436691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.436702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.345 [2024-12-10 04:09:53.436711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.436777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.436783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.436786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.345 [2024-12-10 04:09:53.436837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.436848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.436855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.436863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.345 [2024-12-10 04:09:53.436873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.436943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.345 [2024-12-10 04:09:53.436948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.345 [2024-12-10 04:09:53.436951] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436954] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=4096, cccid=4 00:22:54.345 [2024-12-10 04:09:53.436958] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c9690): expected_datao=0, payload_size=4096 00:22:54.345 [2024-12-10 04:09:53.436962] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436973] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.436977] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.477314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.477317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.345 [2024-12-10 04:09:53.477334] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:54.345 [2024-12-10 04:09:53.477346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.477356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.477363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.477373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.345 [2024-12-10 04:09:53.477385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.477473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.345 [2024-12-10 04:09:53.477479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.345 [2024-12-10 04:09:53.477482] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=4096, cccid=4 00:22:54.345 [2024-12-10 04:09:53.477488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c9690): expected_datao=0, payload_size=4096 00:22:54.345 [2024-12-10 04:09:53.477492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.477501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.518307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.518310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.345 [2024-12-10 04:09:53.518327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.518336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 
1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.518343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.518353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.345 [2024-12-10 04:09:53.518364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.518443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.345 [2024-12-10 04:09:53.518449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.345 [2024-12-10 04:09:53.518452] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518455] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=4096, cccid=4 00:22:54.345 [2024-12-10 04:09:53.518459] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c9690): expected_datao=0, payload_size=4096 00:22:54.345 [2024-12-10 04:09:53.518463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518474] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.518478] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.559310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.559314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.345 [2024-12-10 04:09:53.559328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:54.345 [2024-12-10 04:09:53.559363] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:54.345 [2024-12-10 04:09:53.559367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 
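The trace above is spdk_nvme_identify driving the full NVMe-oF controller-initialization state machine over TCP: the icreq/icresp exchange, FABRIC CONNECT, the VS/CAP/CC/CSTS property reads, enabling the controller (CC.EN = 1, then waiting for CSTS.RDY = 1), Identify Controller, AER configuration, keep-alive and queue-count features, and namespace identification, ending at "transport ready". For readers who want to trigger the same handshake against this target, here is a minimal sketch using SPDK's public host API; it is illustrative only (the app name is made up, error handling is minimal, and this is not the identify tool's actual source):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

        /* Runs the state machine traced above: icreq/icresp, FABRIC
         * CONNECT, property reads, CC.EN = 1, Identify, AER, features. */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s Serial: %.20s MDTS: %u\n",
               cdata->mn, cdata->sn, (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);   /* orderly shutdown, as traced below */
        return 0;
    }

spdk_nvme_connect() does not return until the state machine reaches "ready", which is why a single call accounts for the long run of FABRIC PROPERTY GET/SET and IDENTIFY capsules recorded above.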
00:22:54.345 [2024-12-10 04:09:53.559372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:54.345 [2024-12-10 04:09:53.559385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.559395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.345 [2024-12-10 04:09:53.559403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c9690) 00:22:54.345 [2024-12-10 04:09:53.559414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.345 [2024-12-10 04:09:53.559427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.345 [2024-12-10 04:09:53.559432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b880, cid 5, qid 0 00:22:54.345 [2024-12-10 04:09:53.559506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.345 [2024-12-10 04:09:53.559511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.345 [2024-12-10 04:09:53.559514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.345 [2024-12-10 04:09:53.559517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.559523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.559528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.559531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b880) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.559543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b880, cid 5, qid 0 00:22:54.346 [2024-12-10 04:09:53.559630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.559635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.559638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b880) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.559649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b880, cid 5, qid 0 00:22:54.346 [2024-12-10 04:09:53.559726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.559731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.559734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b880) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.559745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b880, cid 5, qid 0 00:22:54.346 [2024-12-10 04:09:53.559821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.559828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.559831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b880) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.559847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.559893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13c9690) 00:22:54.346 [2024-12-10 04:09:53.559898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.346 [2024-12-10 04:09:53.559908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b880, cid 5, qid 0 00:22:54.346 [2024-12-10 04:09:53.559912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:22:54.346 [2024-12-10 04:09:53.559916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ba00, cid 6, qid 0 00:22:54.346 [2024-12-10 04:09:53.559920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142bb80, cid 7, qid 0 00:22:54.346 [2024-12-10 04:09:53.560054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.346 [2024-12-10 04:09:53.560059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.346 [2024-12-10 04:09:53.560062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=8192, cccid=5 00:22:54.346 [2024-12-10 04:09:53.560069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b880) on tqpair(0x13c9690): expected_datao=0, payload_size=8192 00:22:54.346 [2024-12-10 04:09:53.560073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560098] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560101] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.346 [2024-12-10 04:09:53.560110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.346 [2024-12-10 04:09:53.560113] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560116] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=512, cccid=4 00:22:54.346 [2024-12-10 04:09:53.560120] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c9690): expected_datao=0, payload_size=512 00:22:54.346 [2024-12-10 04:09:53.560124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560135] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.346 [2024-12-10 04:09:53.560145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.346 [2024-12-10 04:09:53.560148] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560151] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=512, cccid=6 00:22:54.346 [2024-12-10 04:09:53.560154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142ba00) on tqpair(0x13c9690): expected_datao=0, payload_size=512 00:22:54.346 [2024-12-10 04:09:53.560158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560172] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
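A decoding aid for the "pdu type" values that recur throughout this trace: they are the NVMe/TCP transport's PDU opcodes. Per the NVMe/TCP specification (the enumerator names below are illustrative, not SPDK's own identifiers; the values are the spec's), type 1 is the ICResp seen once at connect, type 5 is a CapsuleResp completing a command, and type 7 is C2HData carrying read payloads such as the 4096-, 8192-, and 512-byte Identify and Get Log Page transfers above:

    /* NVMe/TCP PDU opcodes (values per the transport spec). */
    enum tcp_pdu_type {
        PDU_IC_REQ       = 0x00, /* connection initialization request   */
        PDU_IC_RESP      = 0x01, /* "pdu type = 1" during connect       */
        PDU_H2C_TERM_REQ = 0x02, /* host-initiated termination          */
        PDU_C2H_TERM_REQ = 0x03, /* controller-initiated termination    */
        PDU_CAPSULE_CMD  = 0x04, /* command capsule, host to controller */
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completion capsule  */
        PDU_H2C_DATA     = 0x06, /* host write data                     */
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": controller data     */
        PDU_R2T          = 0x09, /* ready-to-transfer for host writes   */
    };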
00:22:54.346 [2024-12-10 04:09:53.560177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.346 [2024-12-10 04:09:53.560182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.346 [2024-12-10 04:09:53.560185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560188] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c9690): datao=0, datal=4096, cccid=7 00:22:54.346 [2024-12-10 04:09:53.560191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142bb80) on tqpair(0x13c9690): expected_datao=0, payload_size=4096 00:22:54.346 [2024-12-10 04:09:53.560195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560200] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560203] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.560215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.560218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b880) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.560231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.560236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.560239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.560250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.560255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.560258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142ba00) on tqpair=0x13c9690 00:22:54.346 [2024-12-10 04:09:53.560267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.346 [2024-12-10 04:09:53.560271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.346 [2024-12-10 04:09:53.560274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.346 [2024-12-10 04:09:53.560277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142bb80) on tqpair=0x13c9690 00:22:54.346 ===================================================== 00:22:54.346 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.346 ===================================================== 00:22:54.346 Controller Capabilities/Features 00:22:54.346 ================================ 00:22:54.346 Vendor ID: 8086 00:22:54.346 Subsystem Vendor ID: 8086 00:22:54.346 Serial Number: SPDK00000000000001 00:22:54.346 Model Number: SPDK bdev Controller 00:22:54.346 Firmware Version: 25.01 00:22:54.346 Recommended Arb Burst: 6 00:22:54.346 IEEE OUI Identifier: e4 d2 5c 00:22:54.346 Multi-path I/O 00:22:54.346 May have multiple subsystem ports: Yes 00:22:54.346 May have multiple controllers: Yes 00:22:54.346 Associated 
with SR-IOV VF: No 00:22:54.346 Max Data Transfer Size: 131072 00:22:54.346 Max Number of Namespaces: 32 00:22:54.346 Max Number of I/O Queues: 127 00:22:54.346 NVMe Specification Version (VS): 1.3 00:22:54.346 NVMe Specification Version (Identify): 1.3 00:22:54.346 Maximum Queue Entries: 128 00:22:54.346 Contiguous Queues Required: Yes 00:22:54.346 Arbitration Mechanisms Supported 00:22:54.346 Weighted Round Robin: Not Supported 00:22:54.346 Vendor Specific: Not Supported 00:22:54.346 Reset Timeout: 15000 ms 00:22:54.346 Doorbell Stride: 4 bytes 00:22:54.346 NVM Subsystem Reset: Not Supported 00:22:54.346 Command Sets Supported 00:22:54.347 NVM Command Set: Supported 00:22:54.347 Boot Partition: Not Supported 00:22:54.347 Memory Page Size Minimum: 4096 bytes 00:22:54.347 Memory Page Size Maximum: 4096 bytes 00:22:54.347 Persistent Memory Region: Not Supported 00:22:54.347 Optional Asynchronous Events Supported 00:22:54.347 Namespace Attribute Notices: Supported 00:22:54.347 Firmware Activation Notices: Not Supported 00:22:54.347 ANA Change Notices: Not Supported 00:22:54.347 PLE Aggregate Log Change Notices: Not Supported 00:22:54.347 LBA Status Info Alert Notices: Not Supported 00:22:54.347 EGE Aggregate Log Change Notices: Not Supported 00:22:54.347 Normal NVM Subsystem Shutdown event: Not Supported 00:22:54.347 Zone Descriptor Change Notices: Not Supported 00:22:54.347 Discovery Log Change Notices: Not Supported 00:22:54.347 Controller Attributes 00:22:54.347 128-bit Host Identifier: Supported 00:22:54.347 Non-Operational Permissive Mode: Not Supported 00:22:54.347 NVM Sets: Not Supported 00:22:54.347 Read Recovery Levels: Not Supported 00:22:54.347 Endurance Groups: Not Supported 00:22:54.347 Predictable Latency Mode: Not Supported 00:22:54.347 Traffic Based Keep ALive: Not Supported 00:22:54.347 Namespace Granularity: Not Supported 00:22:54.347 SQ Associations: Not Supported 00:22:54.347 UUID List: Not Supported 00:22:54.347 Multi-Domain Subsystem: Not Supported 00:22:54.347 Fixed Capacity Management: Not Supported 00:22:54.347 Variable Capacity Management: Not Supported 00:22:54.347 Delete Endurance Group: Not Supported 00:22:54.347 Delete NVM Set: Not Supported 00:22:54.347 Extended LBA Formats Supported: Not Supported 00:22:54.347 Flexible Data Placement Supported: Not Supported 00:22:54.347 00:22:54.347 Controller Memory Buffer Support 00:22:54.347 ================================ 00:22:54.347 Supported: No 00:22:54.347 00:22:54.347 Persistent Memory Region Support 00:22:54.347 ================================ 00:22:54.347 Supported: No 00:22:54.347 00:22:54.347 Admin Command Set Attributes 00:22:54.347 ============================ 00:22:54.347 Security Send/Receive: Not Supported 00:22:54.347 Format NVM: Not Supported 00:22:54.347 Firmware Activate/Download: Not Supported 00:22:54.347 Namespace Management: Not Supported 00:22:54.347 Device Self-Test: Not Supported 00:22:54.347 Directives: Not Supported 00:22:54.347 NVMe-MI: Not Supported 00:22:54.347 Virtualization Management: Not Supported 00:22:54.347 Doorbell Buffer Config: Not Supported 00:22:54.347 Get LBA Status Capability: Not Supported 00:22:54.347 Command & Feature Lockdown Capability: Not Supported 00:22:54.347 Abort Command Limit: 4 00:22:54.347 Async Event Request Limit: 4 00:22:54.347 Number of Firmware Slots: N/A 00:22:54.347 Firmware Slot 1 Read-Only: N/A 00:22:54.347 Firmware Activation Without Reset: N/A 00:22:54.347 Multiple Update Detection Support: N/A 00:22:54.347 Firmware Update Granularity: No Information 
Provided 00:22:54.347 Per-Namespace SMART Log: No 00:22:54.347 Asymmetric Namespace Access Log Page: Not Supported 00:22:54.347 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:54.347 Command Effects Log Page: Supported 00:22:54.347 Get Log Page Extended Data: Supported 00:22:54.347 Telemetry Log Pages: Not Supported 00:22:54.347 Persistent Event Log Pages: Not Supported 00:22:54.347 Supported Log Pages Log Page: May Support 00:22:54.347 Commands Supported & Effects Log Page: Not Supported 00:22:54.347 Feature Identifiers & Effects Log Page:May Support 00:22:54.347 NVMe-MI Commands & Effects Log Page: May Support 00:22:54.347 Data Area 4 for Telemetry Log: Not Supported 00:22:54.347 Error Log Page Entries Supported: 128 00:22:54.347 Keep Alive: Supported 00:22:54.347 Keep Alive Granularity: 10000 ms 00:22:54.347 00:22:54.347 NVM Command Set Attributes 00:22:54.347 ========================== 00:22:54.347 Submission Queue Entry Size 00:22:54.347 Max: 64 00:22:54.347 Min: 64 00:22:54.347 Completion Queue Entry Size 00:22:54.347 Max: 16 00:22:54.347 Min: 16 00:22:54.347 Number of Namespaces: 32 00:22:54.347 Compare Command: Supported 00:22:54.347 Write Uncorrectable Command: Not Supported 00:22:54.347 Dataset Management Command: Supported 00:22:54.347 Write Zeroes Command: Supported 00:22:54.347 Set Features Save Field: Not Supported 00:22:54.347 Reservations: Supported 00:22:54.347 Timestamp: Not Supported 00:22:54.347 Copy: Supported 00:22:54.347 Volatile Write Cache: Present 00:22:54.347 Atomic Write Unit (Normal): 1 00:22:54.347 Atomic Write Unit (PFail): 1 00:22:54.347 Atomic Compare & Write Unit: 1 00:22:54.347 Fused Compare & Write: Supported 00:22:54.347 Scatter-Gather List 00:22:54.347 SGL Command Set: Supported 00:22:54.347 SGL Keyed: Supported 00:22:54.347 SGL Bit Bucket Descriptor: Not Supported 00:22:54.347 SGL Metadata Pointer: Not Supported 00:22:54.347 Oversized SGL: Not Supported 00:22:54.347 SGL Metadata Address: Not Supported 00:22:54.347 SGL Offset: Supported 00:22:54.347 Transport SGL Data Block: Not Supported 00:22:54.347 Replay Protected Memory Block: Not Supported 00:22:54.347 00:22:54.347 Firmware Slot Information 00:22:54.347 ========================= 00:22:54.347 Active slot: 1 00:22:54.347 Slot 1 Firmware Revision: 25.01 00:22:54.347 00:22:54.347 00:22:54.347 Commands Supported and Effects 00:22:54.347 ============================== 00:22:54.347 Admin Commands 00:22:54.347 -------------- 00:22:54.347 Get Log Page (02h): Supported 00:22:54.347 Identify (06h): Supported 00:22:54.347 Abort (08h): Supported 00:22:54.347 Set Features (09h): Supported 00:22:54.347 Get Features (0Ah): Supported 00:22:54.347 Asynchronous Event Request (0Ch): Supported 00:22:54.347 Keep Alive (18h): Supported 00:22:54.347 I/O Commands 00:22:54.347 ------------ 00:22:54.347 Flush (00h): Supported LBA-Change 00:22:54.347 Write (01h): Supported LBA-Change 00:22:54.347 Read (02h): Supported 00:22:54.347 Compare (05h): Supported 00:22:54.347 Write Zeroes (08h): Supported LBA-Change 00:22:54.347 Dataset Management (09h): Supported LBA-Change 00:22:54.347 Copy (19h): Supported LBA-Change 00:22:54.347 00:22:54.347 Error Log 00:22:54.347 ========= 00:22:54.347 00:22:54.347 Arbitration 00:22:54.347 =========== 00:22:54.347 Arbitration Burst: 1 00:22:54.347 00:22:54.347 Power Management 00:22:54.347 ================ 00:22:54.347 Number of Power States: 1 00:22:54.347 Current Power State: Power State #0 00:22:54.347 Power State #0: 00:22:54.347 Max Power: 0.00 W 00:22:54.347 Non-Operational State: 
Operational 00:22:54.347 Entry Latency: Not Reported 00:22:54.347 Exit Latency: Not Reported 00:22:54.347 Relative Read Throughput: 0 00:22:54.347 Relative Read Latency: 0 00:22:54.347 Relative Write Throughput: 0 00:22:54.347 Relative Write Latency: 0 00:22:54.347 Idle Power: Not Reported 00:22:54.347 Active Power: Not Reported 00:22:54.347 Non-Operational Permissive Mode: Not Supported 00:22:54.347 00:22:54.347 Health Information 00:22:54.347 ================== 00:22:54.347 Critical Warnings: 00:22:54.347 Available Spare Space: OK 00:22:54.347 Temperature: OK 00:22:54.347 Device Reliability: OK 00:22:54.347 Read Only: No 00:22:54.347 Volatile Memory Backup: OK 00:22:54.347 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:54.347 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:54.347 Available Spare: 0% 00:22:54.347 Available Spare Threshold: 0% 00:22:54.347 Life Percentage Used:[2024-12-10 04:09:53.560357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.347 [2024-12-10 04:09:53.560362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13c9690) 00:22:54.347 [2024-12-10 04:09:53.560367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.347 [2024-12-10 04:09:53.560379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142bb80, cid 7, qid 0 00:22:54.347 [2024-12-10 04:09:53.560461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.347 [2024-12-10 04:09:53.560467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.347 [2024-12-10 04:09:53.560471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.347 [2024-12-10 04:09:53.560474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142bb80) on tqpair=0x13c9690 00:22:54.347 [2024-12-10 04:09:53.560503] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:54.347 [2024-12-10 04:09:53.560512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b100) on tqpair=0x13c9690 00:22:54.347 [2024-12-10 04:09:53.560517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.347 [2024-12-10 04:09:53.560522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b280) on tqpair=0x13c9690 00:22:54.347 [2024-12-10 04:09:53.560525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.348 [2024-12-10 04:09:53.560529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b400) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.560533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.348 [2024-12-10 04:09:53.560537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.560541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.348 [2024-12-10 04:09:53.560548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.560551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.560554] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.560559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.560571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564405] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:54.348 [2024-12-10 04:09:53.564409] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:54.348 [2024-12-10 04:09:53.564416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564531] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 
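From "Prepare to destruct SSD" onward the trace records an orderly NVMe shutdown: the outstanding admin requests (the earlier-issued asynchronous event requests) complete as ABORTED - SQ DELETION, RTD3E = 0 falls back to the default "shutdown timeout = 10000 ms", and the long run of identical FABRIC PROPERTY GET qid:0 cid:3 capsules is the host polling the CSTS register over the admin queue until its shutdown-status field reports completion (the discovery controller earlier finished "in 7 milliseconds"). A sketch of that poll's shape, with a hypothetical stand-in for the SPDK internals (nvme_ctrlr_shutdown_poll_async in the trace):

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical stand-in: each read_csts() is one FABRIC PROPERTY GET
     * capsule on the admin queue, i.e. one of the repeated records above. */
    uint32_t read_csts(void *ctrlr);

    /* After CC.SHN requests a normal shutdown, poll CSTS until the SHST
     * field (bits 03:02) reads 2, "shutdown processing complete". */
    bool shutdown_complete(void *ctrlr)
    {
        uint32_t csts = read_csts(ctrlr);
        return ((csts >> 2) & 0x3) == 0x2;
    }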
00:22:54.348 [2024-12-10 04:09:53.564836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.564901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.564907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.564910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.564921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.564927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.564933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.564942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.565001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.565007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.565010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.565021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.565033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 04:09:53.565042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.565099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.565104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.565107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.565118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.348 [2024-12-10 04:09:53.565130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.348 [2024-12-10 
04:09:53.565139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.348 [2024-12-10 04:09:53.565200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.348 [2024-12-10 04:09:53.565206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.348 [2024-12-10 04:09:53.565210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.348 [2024-12-10 04:09:53.565220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.348 [2024-12-10 04:09:53.565227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:22:54.349 [2024-12-10 04:09:53.565524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.565913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.565918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.565921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.565932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.565939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.565944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.565953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.566030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.566036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.566039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.566050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.566062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.566072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.566147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.566154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.566158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.566173] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.566185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.566194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.566265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.566271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.566274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.566285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.566297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.566306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.566384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.566390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.566393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.566405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.566416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.566425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.349 [2024-12-10 04:09:53.566499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.349 [2024-12-10 04:09:53.566504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.349 [2024-12-10 04:09:53.566507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.349 [2024-12-10 04:09:53.566518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.349 [2024-12-10 04:09:53.566525] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.349 [2024-12-10 04:09:53.566530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.349 [2024-12-10 04:09:53.566540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.566599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.566605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.566609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.566620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.566632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.566641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.566698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.566703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.566706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.566717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.566729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.566738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.566797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.566802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.566805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.566817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.566828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.566838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.566895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.566900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.566903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.566914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.566920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.566926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.566935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.566991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.566996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.566999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.567012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.567024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.567034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.567109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.567115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.567117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.567129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.567135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.567140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.567149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 
04:09:53.571173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.571181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.571184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.571187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.571197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.571200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.571203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c9690) 00:22:54.350 [2024-12-10 04:09:53.571209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.350 [2024-12-10 04:09:53.571220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b580, cid 3, qid 0 00:22:54.350 [2024-12-10 04:09:53.571390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.350 [2024-12-10 04:09:53.571395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.350 [2024-12-10 04:09:53.571398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.350 [2024-12-10 04:09:53.571401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x142b580) on tqpair=0x13c9690 00:22:54.350 [2024-12-10 04:09:53.571409] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:54.350 0% 00:22:54.350 Data Units Read: 0 00:22:54.350 Data Units Written: 0 00:22:54.350 Host Read Commands: 0 00:22:54.350 Host Write Commands: 0 00:22:54.350 Controller Busy Time: 0 minutes 00:22:54.350 Power Cycles: 0 00:22:54.350 Power On Hours: 0 hours 00:22:54.350 Unsafe Shutdowns: 0 00:22:54.350 Unrecoverable Media Errors: 0 00:22:54.350 Lifetime Error Log Entries: 0 00:22:54.350 Warning Temperature Time: 0 minutes 00:22:54.350 Critical Temperature Time: 0 minutes 00:22:54.350 00:22:54.350 Number of Queues 00:22:54.350 ================ 00:22:54.350 Number of I/O Submission Queues: 127 00:22:54.350 Number of I/O Completion Queues: 127 00:22:54.350 00:22:54.350 Active Namespaces 00:22:54.350 ================= 00:22:54.350 Namespace ID:1 00:22:54.350 Error Recovery Timeout: Unlimited 00:22:54.350 Command Set Identifier: NVM (00h) 00:22:54.350 Deallocate: Supported 00:22:54.350 Deallocated/Unwritten Error: Not Supported 00:22:54.350 Deallocated Read Value: Unknown 00:22:54.350 Deallocate in Write Zeroes: Not Supported 00:22:54.350 Deallocated Guard Field: 0xFFFF 00:22:54.350 Flush: Supported 00:22:54.350 Reservation: Supported 00:22:54.350 Namespace Sharing Capabilities: Multiple Controllers 00:22:54.350 Size (in LBAs): 131072 (0GiB) 00:22:54.350 Capacity (in LBAs): 131072 (0GiB) 00:22:54.350 Utilization (in LBAs): 131072 (0GiB) 00:22:54.350 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:54.350 EUI64: ABCDEF0123456789 00:22:54.350 UUID: 22264476-f08d-4754-95cd-592927b01571 00:22:54.350 Thin Provisioning: Not Supported 00:22:54.350 Per-NS Atomic Units: Yes 00:22:54.350 Atomic Boundary Size (Normal): 0 00:22:54.350 Atomic Boundary Size (PFail): 0 00:22:54.350 Atomic Boundary Offset: 0 00:22:54.350 Maximum Single Source Range Length: 65535 00:22:54.350 Maximum Copy Length: 65535 00:22:54.350 
Maximum Source Range Count: 1 00:22:54.350 NGUID/EUI64 Never Reused: No 00:22:54.350 Namespace Write Protected: No 00:22:54.350 Number of LBA Formats: 1 00:22:54.350 Current LBA Format: LBA Format #00 00:22:54.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:54.350 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.350 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.350 rmmod nvme_tcp 00:22:54.350 rmmod nvme_fabrics 00:22:54.610 rmmod nvme_keyring 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 139031 ']' 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 139031 ']' 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139031' 00:22:54.610 killing process with pid 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 139031 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.610 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.869 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.869 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.869 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.869 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.869 04:09:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.773 00:22:56.773 real 0m9.914s 00:22:56.773 user 0m8.192s 00:22:56.773 sys 0m4.847s 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:56.773 ************************************ 00:22:56.773 END TEST nvmf_identify 00:22:56.773 ************************************ 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.773 04:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.773 ************************************ 00:22:56.773 START TEST nvmf_perf 00:22:56.773 ************************************ 00:22:56.773 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:57.033 * Looking for test storage... 
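The cleanup sequence recorded above reduces to a handful of commands that can be replayed by hand against a live target; a minimal sketch, assuming the same workspace layout, subsystem NQN and running nvmf_tgt as in this run (root privileges needed for the module unloads):

    # Tear down the subsystem the identify test created; rpc.py talks to the
    # target over its default UNIX-domain socket (/var/tmp/spdk.sock).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator modules, as nvmftestfini does above;
    # nvme_keyring is removed as a dependency, matching the rmmod lines.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
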
00:22:57.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.033 --rc genhtml_branch_coverage=1 00:22:57.033 --rc genhtml_function_coverage=1 00:22:57.033 --rc genhtml_legend=1 00:22:57.033 --rc geninfo_all_blocks=1 00:22:57.033 --rc geninfo_unexecuted_blocks=1 00:22:57.033 00:22:57.033 ' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.033 --rc genhtml_branch_coverage=1 00:22:57.033 --rc genhtml_function_coverage=1 00:22:57.033 --rc genhtml_legend=1 00:22:57.033 --rc geninfo_all_blocks=1 00:22:57.033 --rc geninfo_unexecuted_blocks=1 00:22:57.033 00:22:57.033 ' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.033 --rc genhtml_branch_coverage=1 00:22:57.033 --rc genhtml_function_coverage=1 00:22:57.033 --rc genhtml_legend=1 00:22:57.033 --rc geninfo_all_blocks=1 00:22:57.033 --rc geninfo_unexecuted_blocks=1 00:22:57.033 00:22:57.033 ' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.033 --rc genhtml_branch_coverage=1 00:22:57.033 --rc genhtml_function_coverage=1 00:22:57.033 --rc genhtml_legend=1 00:22:57.033 --rc geninfo_all_blocks=1 00:22:57.033 --rc geninfo_unexecuted_blocks=1 00:22:57.033 00:22:57.033 ' 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2-6 -- # [four near-identical PATH dumps elided: export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH, exports the result, and echoes it]
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:57.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:57.033 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:57.034 04:09:56
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.034 04:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.600 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:03.601 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:03.601 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:03.601 Found net devices under 0000:af:00.0: cvl_0_0 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:03.601 04:10:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:03.601 Found net devices under 0000:af:00.1: cvl_0_1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.601 04:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.601 04:10:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:23:03.601 00:23:03.601 --- 10.0.0.2 ping statistics --- 00:23:03.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.601 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:03.601 00:23:03.601 --- 10.0.0.1 ping statistics --- 00:23:03.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.601 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=142749 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 142749 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 142749 ']' 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:03.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.601 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:03.601 [2024-12-10 04:10:02.190644] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:23:03.601 [2024-12-10 04:10:02.190692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.601 [2024-12-10 04:10:02.268638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.601 [2024-12-10 04:10:02.308009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.601 [2024-12-10 04:10:02.308045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.601 [2024-12-10 04:10:02.308052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.601 [2024-12-10 04:10:02.308058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.602 [2024-12-10 04:10:02.308063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.602 [2024-12-10 04:10:02.309489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.602 [2024-12-10 04:10:02.309598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.602 [2024-12-10 04:10:02.309703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.602 [2024-12-10 04:10:02.309705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:03.602 04:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
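
Everything in this step is driven over SPDK's JSON-RPC socket: gen_nvme.sh emits the controller config, load_subsystem_config applies it, and the jq filter recovers the PCIe address of the attached Nvme0 controller from framework_get_config before a malloc bdev is created alongside it. A minimal standalone sketch of the same provisioning sequence, assuming a running nvmf_tgt reachable on the default RPC socket and that Nvme0 was already attached:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Recover the PCIe address (traddr) of the auto-attached Nvme0 controller.
    local_nvme_trid=$("$rpc" framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # Create a 64 MB malloc bdev with 512-byte blocks; the new bdev name
    # (Malloc0) is printed on stdout.
    bdevs=$("$rpc" bdev_malloc_create 64 512)
    # Only include the local NVMe namespace if a controller was found.
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"
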
00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:06.887 04:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.887 [2024-12-10 04:10:06.085782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.887 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.145 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.145 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.404 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:07.404 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:07.662 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.662 [2024-12-10 04:10:06.914190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.921 04:10:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:07.921 04:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:07.921 04:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:07.921 04:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:07.921 04:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:09.297 Initializing NVMe Controllers 00:23:09.297 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:09.297 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:09.297 Initialization complete. Launching workers. 
00:23:09.297 ======================================================== 00:23:09.297 Latency(us) 00:23:09.297 Device Information : IOPS MiB/s Average min max 00:23:09.297 PCIE (0000:5e:00.0) NSID 1 from core 0: 99651.44 389.26 320.67 9.99 5193.10 00:23:09.297 ======================================================== 00:23:09.297 Total : 99651.44 389.26 320.67 9.99 5193.10 00:23:09.297 00:23:09.297 04:10:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.674 Initializing NVMe Controllers 00:23:10.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.674 Initialization complete. Launching workers. 00:23:10.674 ======================================================== 00:23:10.674 Latency(us) 00:23:10.674 Device Information : IOPS MiB/s Average min max 00:23:10.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 308.91 1.21 3338.27 117.01 44667.53 00:23:10.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.80 0.22 17906.45 7218.44 47909.25 00:23:10.674 ======================================================== 00:23:10.674 Total : 364.71 1.42 5567.28 117.01 47909.25 00:23:10.674 00:23:10.674 04:10:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:12.049 Initializing NVMe Controllers 00:23:12.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.049 Initialization complete. Launching workers. 00:23:12.049 ======================================================== 00:23:12.049 Latency(us) 00:23:12.049 Device Information : IOPS MiB/s Average min max 00:23:12.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11192.29 43.72 2857.11 452.31 9713.66 00:23:12.049 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3704.19 14.47 8656.65 4115.08 16502.65 00:23:12.049 ======================================================== 00:23:12.049 Total : 14896.48 58.19 4299.24 452.31 16502.65 00:23:12.049 00:23:12.049 04:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:12.049 04:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:12.049 04:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:14.583 Initializing NVMe Controllers 00:23:14.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.583 Controller IO queue size 128, less than required. 00:23:14.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:14.583 Controller IO queue size 128, less than required. 00:23:14.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:14.583 Initialization complete. Launching workers. 00:23:14.583 ======================================================== 00:23:14.583 Latency(us) 00:23:14.583 Device Information : IOPS MiB/s Average min max 00:23:14.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1807.96 451.99 72281.38 45714.86 119655.27 00:23:14.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.49 144.12 224917.06 76066.89 354364.34 00:23:14.583 ======================================================== 00:23:14.583 Total : 2384.45 596.11 109184.07 45714.86 354364.34 00:23:14.583 00:23:14.583 04:10:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:14.583 No valid NVMe controllers or AIO or URING devices found 00:23:14.583 Initializing NVMe Controllers 00:23:14.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.583 Controller IO queue size 128, less than required. 00:23:14.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.583 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:14.583 Controller IO queue size 128, less than required. 00:23:14.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.583 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:14.583 WARNING: Some requested NVMe devices were skipped 00:23:14.583 04:10:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:17.116 Initializing NVMe Controllers 00:23:17.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.116 Controller IO queue size 128, less than required. 00:23:17.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.116 Controller IO queue size 128, less than required. 00:23:17.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:17.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:17.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:17.116 Initialization complete. Launching workers. 
00:23:17.116 00:23:17.116 ==================== 00:23:17.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:17.116 TCP transport: 00:23:17.116 polls: 12312 00:23:17.116 idle_polls: 8734 00:23:17.116 sock_completions: 3578 00:23:17.116 nvme_completions: 6233 00:23:17.116 submitted_requests: 9416 00:23:17.116 queued_requests: 1 00:23:17.116 00:23:17.116 ==================== 00:23:17.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:17.116 TCP transport: 00:23:17.116 polls: 16105 00:23:17.116 idle_polls: 11979 00:23:17.116 sock_completions: 4126 00:23:17.116 nvme_completions: 6573 00:23:17.116 submitted_requests: 9834 00:23:17.116 queued_requests: 1 00:23:17.116 ======================================================== 00:23:17.116 Latency(us) 00:23:17.116 Device Information : IOPS MiB/s Average min max 00:23:17.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1557.97 389.49 85106.86 46638.56 142789.12 00:23:17.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1642.97 410.74 78239.64 47522.13 129388.68 00:23:17.116 ======================================================== 00:23:17.116 Total : 3200.94 800.24 81582.07 46638.56 142789.12 00:23:17.116 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.116 rmmod nvme_tcp 00:23:17.116 rmmod nvme_fabrics 00:23:17.116 rmmod nvme_keyring 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 142749 ']' 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 142749 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 142749 ']' 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 142749 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.116 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142749 00:23:17.375 04:10:16 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.375 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.375 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142749' 00:23:17.375 killing process with pid 142749 00:23:17.375 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 142749 00:23:17.375 04:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 142749 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.751 04:10:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.286 00:23:21.286 real 0m23.912s 00:23:21.286 user 1m1.942s 00:23:21.286 sys 0m8.277s 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.286 ************************************ 00:23:21.286 END TEST nvmf_perf 00:23:21.286 ************************************ 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.286 04:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.286 ************************************ 00:23:21.286 START TEST nvmf_fio_host 00:23:21.286 ************************************ 00:23:21.286 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:21.286 * Looking for test storage... 
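
Before the fio pass starts, note that the teardown just performed (nvmftestfini) is the mirror image of the setup: the target is killed by pid, the kernel initiator modules are unloaded, the SPDK-tagged iptables rules are dropped, and the target namespace is removed. A rough sketch of that cleanup, using the pid and interface names from the run above (remove_spdk_ns is a harness helper; deleting the namespace directly is assumed equivalent here):

    kill -9 "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp        # emits the "rmmod nvme_tcp ..." lines seen above
    modprobe -v -r nvme-fabrics
    # Drop only the rules tagged SPDK_NVMF at setup time, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

Filtering on the SPDK_NVMF comment is what lets the harness restore iptables without disturbing unrelated firewall rules.
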
00:23:21.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.286 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.286 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.286 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.286 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.287 --rc genhtml_branch_coverage=1 00:23:21.287 --rc genhtml_function_coverage=1 00:23:21.287 --rc genhtml_legend=1 00:23:21.287 --rc geninfo_all_blocks=1 00:23:21.287 --rc geninfo_unexecuted_blocks=1 00:23:21.287 00:23:21.287 ' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.287 --rc genhtml_branch_coverage=1 00:23:21.287 --rc genhtml_function_coverage=1 00:23:21.287 --rc genhtml_legend=1 00:23:21.287 --rc geninfo_all_blocks=1 00:23:21.287 --rc geninfo_unexecuted_blocks=1 00:23:21.287 00:23:21.287 ' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.287 --rc genhtml_branch_coverage=1 00:23:21.287 --rc genhtml_function_coverage=1 00:23:21.287 --rc genhtml_legend=1 00:23:21.287 --rc geninfo_all_blocks=1 00:23:21.287 --rc geninfo_unexecuted_blocks=1 00:23:21.287 00:23:21.287 ' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.287 --rc genhtml_branch_coverage=1 00:23:21.287 --rc genhtml_function_coverage=1 00:23:21.287 --rc genhtml_legend=1 00:23:21.287 --rc geninfo_all_blocks=1 00:23:21.287 --rc geninfo_unexecuted_blocks=1 00:23:21.287 00:23:21.287 ' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.287 04:10:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.287 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.288 
04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.288 04:10:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.634 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:26.635 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:26.635 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:26.635 Found net devices under 0000:af:00.0: cvl_0_0 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:26.635 Found net devices under 0000:af:00.1: cvl_0_1 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.635 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.895 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.895 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.895 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.895 04:10:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:23:26.895 00:23:26.895 --- 10.0.0.2 ping statistics --- 00:23:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.895 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:26.895 00:23:26.895 --- 10.0.0.1 ping statistics --- 00:23:26.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.895 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=148727 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 148727 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 148727 ']' 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.895 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.895 [2024-12-10 04:10:26.160663] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
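
The target for the fio pass is launched inside the namespace set up above, and waitforlisten then blocks until the new process answers on the RPC socket. waitforlisten is an autotest_common.sh helper, not a standalone script; a rough equivalent, assuming rpc_get_methods is an acceptable liveness probe for the default socket:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Poll the default RPC socket until the target is up and accepting commands.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
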
00:23:26.895 [2024-12-10 04:10:26.160714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.154 [2024-12-10 04:10:26.239163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.154 [2024-12-10 04:10:26.280334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.154 [2024-12-10 04:10:26.280372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.154 [2024-12-10 04:10:26.280380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.154 [2024-12-10 04:10:26.280385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.154 [2024-12-10 04:10:26.280390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.154 [2024-12-10 04:10:26.281818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.154 [2024-12-10 04:10:26.281927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.154 [2024-12-10 04:10:26.282036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.154 [2024-12-10 04:10:26.282037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.154 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.154 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:27.154 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:27.413 [2024-12-10 04:10:26.555146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.413 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:27.413 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.413 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.413 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:27.672 Malloc1 00:23:27.672 04:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.931 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.189 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.189 [2024-12-10 04:10:27.394329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.189 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:28.448 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:28.448 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:28.449 04:10:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:28.708 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:28.708 fio-3.35 00:23:28.708 Starting 1 thread 00:23:31.242 00:23:31.242 test: (groupid=0, jobs=1): 
err= 0: pid=149308: Tue Dec 10 04:10:30 2024
00:23:31.242 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.2MiB/2005msec)
00:23:31.242 slat (nsec): min=1510, max=263064, avg=1723.57, stdev=2361.21
00:23:31.242 clat (usec): min=3302, max=10527, avg=5934.63, stdev=486.16
00:23:31.242 lat (usec): min=3335, max=10528, avg=5936.35, stdev=486.17
00:23:31.242 clat percentiles (usec):
00:23:31.242 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538],
00:23:31.242 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063],
00:23:31.242 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652],
00:23:31.242 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 9241],
00:23:31.242 | 99.99th=[10421]
00:23:31.242 bw ( KiB/s): min=46816, max=48080, per=99.94%, avg=47588.00, stdev=541.39, samples=4
00:23:31.242 iops : min=11704, max=12020, avg=11897.00, stdev=135.35, samples=4
00:23:31.242 write: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec); 0 zone resets
00:23:31.242 slat (nsec): min=1548, max=233631, avg=1807.67, stdev=1710.30
00:23:31.242 clat (usec): min=2527, max=9168, avg=4806.72, stdev=391.44
00:23:31.242 lat (usec): min=2543, max=9169, avg=4808.53, stdev=391.51
00:23:31.242 clat percentiles (usec):
00:23:31.242 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490],
00:23:31.242 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883],
00:23:31.242 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407],
00:23:31.242 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 7177], 99.95th=[ 8160],
00:23:31.242 | 99.99th=[ 9110]
00:23:31.242 bw ( KiB/s): min=46784, max=47872, per=100.00%, avg=47410.00, stdev=456.79, samples=4
00:23:31.242 iops : min=11696, max=11968, avg=11852.50, stdev=114.20, samples=4
00:23:31.242 lat (msec) : 4=0.85%, 10=99.13%, 20=0.01%
00:23:31.242 cpu : usr=75.15%, sys=24.10%, ctx=60, majf=0, minf=2
00:23:31.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:23:31.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:31.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:31.242 issued rwts: total=23868,23758,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:31.242 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:31.242
00:23:31.242 Run status group 0 (all jobs):
00:23:31.242 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.2MiB (97.8MB), run=2005-2005msec
00:23:31.242 WRITE: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- #
local sanitizers
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:23:31.242 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:31.243 04:10:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:31.501 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:23:31.501 fio-3.35
00:23:31.501 Starting 1 thread
00:23:34.036
00:23:34.036 test: (groupid=0, jobs=1): err= 0: pid=149866: Tue Dec 10 04:10:32 2024
00:23:34.036 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2005msec)
00:23:34.036 slat (nsec): min=2462, max=92481, avg=2789.13, stdev=1296.45
00:23:34.036 clat (usec): min=1432, max=13640, avg=6700.28, stdev=1618.92
00:23:34.036 lat (usec): min=1435, max=13655, avg=6703.07, stdev=1619.07
00:23:34.037 clat percentiles (usec):
00:23:34.037 | 1.00th=[ 3523], 5.00th=[ 4228], 10.00th=[ 4621], 20.00th=[ 5276],
00:23:34.037 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177],
00:23:34.037 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9503],
00:23:34.037 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12387], 99.95th=[13042],
00:23:34.037 | 99.99th=[13566]
00:23:34.037 bw ( KiB/s): min=82528, max=95872, per=50.78%, avg=89672.00, stdev=5797.59, samples=4
00:23:34.037 iops : min= 5158, max= 5992, avg=5604.50, stdev=362.35, samples=4
00:23:34.037 write: IOPS=6543, BW=102MiB/s (107MB/s)(183MiB/1790msec); 0 zone resets
00:23:34.037 slat (usec): min=29, max=387, avg=31.35, stdev= 7.73
00:23:34.037 clat (usec): min=2171, max=15105, avg=8576.74, stdev=1518.85
00:23:34.037 lat (usec): min=2200, max=15216, avg=8608.08, stdev=1520.65
00:23:34.037 clat percentiles (usec):
00:23:34.037 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7242],
00:23:34.037 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848],
00:23:34.037 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11207],
00:23:34.037 | 99.00th=[12256], 99.50th=[13042], 99.90th=[14877], 99.95th=[14877],
00:23:34.037 | 99.99th=[15008]
00:23:34.037 bw ( KiB/s): min=87168, max=99712, per=89.40%, avg=93600.00, stdev=5671.32, samples=4
00:23:34.037 iops : min= 5448, max= 6232, avg=5850.00, stdev=354.46, samples=4
00:23:34.037 lat (msec) : 2=0.02%, 4=2.15%, 10=89.49%, 20=8.34%
00:23:34.037 cpu : usr=84.13%, sys=14.92%, ctx=47, majf=0, minf=2
00:23:34.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:34.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:34.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:34.037 issued rwts: total=22129,11713,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:34.037 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:34.037
00:23:34.037 Run status group 0 (all jobs):
00:23:34.037 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2005-2005msec
00:23:34.037 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=183MiB (192MB), run=1790-1790msec
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:34.037 rmmod nvme_tcp
00:23:34.037 rmmod nvme_fabrics
00:23:34.037 rmmod nvme_keyring
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 148727 ']'
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 148727
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 148727 ']'
00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host --
common/autotest_common.sh@958 -- # kill -0 148727 00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.037 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148727 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148727' 00:23:34.297 killing process with pid 148727 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 148727 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 148727 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.297 04:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.832 00:23:36.832 real 0m15.601s 00:23:36.832 user 0m45.414s 00:23:36.832 sys 0m6.485s 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.832 ************************************ 00:23:36.832 END TEST nvmf_fio_host 00:23:36.832 ************************************ 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.832 ************************************ 00:23:36.832 START TEST nvmf_failover 00:23:36.832 ************************************ 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:36.832 * Looking for test storage... 00:23:36.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.832 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:36.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.833 --rc genhtml_branch_coverage=1 00:23:36.833 --rc genhtml_function_coverage=1 00:23:36.833 --rc genhtml_legend=1 00:23:36.833 --rc geninfo_all_blocks=1 00:23:36.833 --rc geninfo_unexecuted_blocks=1 00:23:36.833 00:23:36.833 ' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:36.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.833 --rc genhtml_branch_coverage=1 00:23:36.833 --rc genhtml_function_coverage=1 00:23:36.833 --rc genhtml_legend=1 00:23:36.833 --rc geninfo_all_blocks=1 00:23:36.833 --rc geninfo_unexecuted_blocks=1 00:23:36.833 00:23:36.833 ' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:36.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.833 --rc genhtml_branch_coverage=1 00:23:36.833 --rc genhtml_function_coverage=1 00:23:36.833 --rc genhtml_legend=1 00:23:36.833 --rc geninfo_all_blocks=1 00:23:36.833 --rc geninfo_unexecuted_blocks=1 00:23:36.833 00:23:36.833 ' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:36.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.833 --rc genhtml_branch_coverage=1 00:23:36.833 --rc genhtml_function_coverage=1 00:23:36.833 --rc genhtml_legend=1 00:23:36.833 --rc geninfo_all_blocks=1 00:23:36.833 --rc geninfo_unexecuted_blocks=1 00:23:36.833 00:23:36.833 ' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
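
An aside on what failover.sh is about to drive, since the xtrace below is dense. nvmftestinit (next) builds the test network, then the script starts nvmf_tgt and publishes a Malloc bdev through one subsystem on three TCP listeners before churning those listeners under load. Reduced to the rpc.py calls that appear verbatim later in this log (the sizes and names -- 64/512, Malloc0, cnode1, 10.0.0.2, ports 4420-4422 -- are exactly what this run uses; the loop is only shorthand for the three separate calls):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with the same flags this harness passes (-o, -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks, then a subsystem exporting it
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the same address, giving the host paths to fail over between
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done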
00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.833 04:10:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:43.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:43.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:43.398 Found net devices under 0000:af:00.0: cvl_0_0 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:43.398 Found net devices under 0000:af:00.1: cvl_0_1 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.398 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:23:43.399 00:23:43.399 --- 10.0.0.2 ping statistics --- 00:23:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.399 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:23:43.399 00:23:43.399 --- 10.0.0.1 ping statistics --- 00:23:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.399 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=153609 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 153609 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 153609 ']' 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.399 04:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.399 [2024-12-10 04:10:41.918884] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:23:43.399 [2024-12-10 04:10:41.918933] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.399 [2024-12-10 04:10:41.997762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:43.399 [2024-12-10 04:10:42.038956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:43.399 [2024-12-10 04:10:42.038990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.399 [2024-12-10 04:10:42.038997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.399 [2024-12-10 04:10:42.039003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.399 [2024-12-10 04:10:42.039009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.399 [2024-12-10 04:10:42.040267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.399 [2024-12-10 04:10:42.040375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.399 [2024-12-10 04:10:42.040377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:43.399 [2024-12-10 04:10:42.337066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:43.399 Malloc0 00:23:43.399 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.658 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.917 04:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.917 [2024-12-10 04:10:43.108965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.917 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:44.176 [2024-12-10 04:10:43.313545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:44.176 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:44.434 [2024-12-10 04:10:43.522221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=154018 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 154018 /var/tmp/bdevperf.sock 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 154018 ']' 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.434 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.693 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.693 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:44.693 04:10:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:44.951 NVMe0n1 00:23:44.951 04:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:45.210 00:23:45.468 04:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.468 04:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=154046 00:23:45.468 04:10:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:46.401 04:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.401 [2024-12-10 04:10:45.672263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 
04:10:45.672327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.401 [2024-12-10 04:10:45.672386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54560 is same with the state(6) to be set 00:23:46.660 04:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:49.948 04:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:49.948 00:23:49.948 04:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:49.948 [2024-12-10 04:10:49.184741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 00:23:49.948 [2024-12-10 04:10:49.184823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d551c0 is same with the state(6) to be set 
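
This is the heart of the failover test: bdevperf has attached the same subsystem through two portals with -x failover, and the script is now removing and re-adding listeners one at a time while I/O keeps running. Each removal produces a burst of the tcp.c:1790 qpair-state messages seen around here as the target tears down the affected connections. The sequence being driven, with the exact names and ports from this run (the trailing comment summarizes the steps that follow in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"   # bdevperf's private RPC socket
    # two paths to the same namespace; -x failover keeps the extra path as standby
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # drop the active listener; I/O is expected to continue on 4421
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the log then repeats the dance: attach 4422, drop 4421, re-add 4420, drop 4422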
00:23:49.948 04:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:53.237 04:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:53.237 [2024-12-10 04:10:52.400897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:53.237 04:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:54.174 04:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:54.433 [2024-12-10 04:10:53.633513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea1710 is same with the state(6) to be set
[... the same recv-state message for tqpair=0x1ea1710 repeats while the 4422 listener is torn down; duplicates elided ...]
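The listener operations above are the whole failover trigger: each removal aborts the queued I/O on that path (the recv-state errors here and the SQ DELETION storm in try.txt below), after which bdev_nvme fails over to the next registered trid. A condensed sketch of the same round-robin, with the addresses and delays used in this run (the behavior noted in the comments is what the test expects, not something this sketch verifies):

# Failover trigger used by the test: drop the active listener, let bdev_nvme
# fail over to the next trid, then restore 4420 so the cycle can close.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O expected to move 4420 -> 4421
sleep 3
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # I/O expected to move 4421 -> 4422
sleep 3
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
sleep 1
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # I/O expected to move 4422 -> 4420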
00:23:54.434 04:10:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 154046
00:24:01.008 {
00:24:01.008   "results": [
00:24:01.008     {
00:24:01.008       "job": "NVMe0n1",
00:24:01.008       "core_mask": "0x1",
00:24:01.008       "workload": "verify",
00:24:01.008       "status": "finished",
00:24:01.008       "verify_range": {
00:24:01.008         "start": 0,
00:24:01.008         "length": 16384
00:24:01.008       },
00:24:01.008       "queue_depth": 128,
00:24:01.008       "io_size": 4096,
00:24:01.008       "runtime": 15.012684,
00:24:01.008       "iops": 11204.458842935746,
00:24:01.008       "mibps": 43.76741735521776,
00:24:01.008       "io_failed": 13765,
00:24:01.008       "io_timeout": 0,
00:24:01.008       "avg_latency_us": 10538.341284505846,
00:24:01.008       "min_latency_us": 460.312380952381,
00:24:01.008       "max_latency_us": 21470.841904761906
00:24:01.008     }
00:24:01.008   ],
00:24:01.008   "core_count": 1
00:24:01.008 }
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 154018 ']'
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154018'
killing process with pid 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 154018
00:24:01.008 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:01.008 [2024-12-10 04:10:43.598538] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:24:01.008 [2024-12-10 04:10:43.598589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154018 ]
00:24:01.008 [2024-12-10 04:10:43.657334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:01.008 [2024-12-10 04:10:43.699059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:01.008 Running I/O for 15 seconds...
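Before the try.txt trace continues: the JSON summary printed after wait above can be checked mechanically. A small post-processing sketch, assuming the summary block has been saved to a hypothetical results.json and that jq is available (the field names are exactly those in the block above):

# Hypothetical check against the bdevperf summary printed above.
jq -r '.results[0] | "iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' results.json
# The run should finish with zero timeouts even though I/O was aborted
# during the forced failovers (io_failed counts those aborts).
test "$(jq '.results[0].io_timeout' results.json)" -eq 0 && echo "no I/O timeouts"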
00:24:01.008 11314.00 IOPS, 44.20 MiB/s [2024-12-10T03:11:00.294Z]
[2024-12-10 04:10:45.672808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:01.008 [2024-12-10 04:10:45.672841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching print_command/print_completion pairs follow for READ lba:98824 through lba:99712 and WRITE lba:99728 through lba:99784, every one completed ABORTED - SQ DELETION (00/08); duplicates elided ...]
00:24:01.011 [2024-12-10 04:10:45.674653] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.011 [2024-12-10 04:10:45.674731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.011 [2024-12-10 04:10:45.674757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.011 [2024-12-10 04:10:45.674763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99840 len:8 PRP1 0x0 PRP2 0x0 00:24:01.011 [2024-12-10 04:10:45.674770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674816] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:01.011 [2024-12-10 04:10:45.674839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.011 [2024-12-10 04:10:45.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.011 [2024-12-10 04:10:45.674860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674867] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.011 [2024-12-10 04:10:45.674873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.011 [2024-12-10 04:10:45.674887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:45.674893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:01.011 [2024-12-10 04:10:45.677698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:01.011 [2024-12-10 04:10:45.677727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e15d0 (9): Bad file descriptor 00:24:01.011 [2024-12-10 04:10:45.700795] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:01.011 11181.50 IOPS, 43.68 MiB/s [2024-12-10T03:11:00.297Z] 11248.67 IOPS, 43.94 MiB/s [2024-12-10T03:11:00.297Z] 11314.75 IOPS, 44.20 MiB/s [2024-12-10T03:11:00.297Z] [2024-12-10 04:10:49.186926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.186960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.186975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.186983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.186997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 
04:10:49.187058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.011 [2024-12-10 04:10:49.187153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.011 [2024-12-10 04:10:49.187163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.012 [2024-12-10 04:10:49.187339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35208 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.012 [2024-12-10 04:10:49.187686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.012 [2024-12-10 04:10:49.187694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 
04:10:49.187701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.187984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.187994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.013 [2024-12-10 04:10:49.188324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.013 [2024-12-10 04:10:49.188332] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.014 [2024-12-10 04:10:49.188339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35624 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35632 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35640 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35648 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35656 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35664 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 
04:10:49.188505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35672 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35680 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35688 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35696 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35704 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35712 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35720 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35728 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35736 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35744 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35752 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35760 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35768 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35776 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35784 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35792 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35800 len:8 PRP1 0x0 PRP2 0x0 00:24:01.014 [2024-12-10 04:10:49.188915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.014 [2024-12-10 04:10:49.188921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.014 [2024-12-10 04:10:49.188927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.014 [2024-12-10 04:10:49.188932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35808 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.188939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:01.015 [2024-12-10 04:10:49.188948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.188953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.188958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35816 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.188966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.188973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.188978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.188984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35824 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.188990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.188996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.189001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.189007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35832 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.189013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.189020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.189025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.189030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35840 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.189036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.189043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.189048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.189053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35848 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.189061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.189067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.015 [2024-12-10 04:10:49.189072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.015 [2024-12-10 04:10:49.189078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35856 len:8 PRP1 0x0 PRP2 0x0 00:24:01.015 [2024-12-10 04:10:49.189084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.015 [2024-12-10 04:10:49.189090] 
00:24:01.015 [… ~40 repeated nvme_qpair records elided: nvme_qpair_abort_queued_reqs / "Command completed manually" for queued WRITE I/O (sqid:1, lba 35864-35928) and one READ (lba 35096), each completed as ABORTED - SQ DELETION (00/08) …]
00:24:01.015 [2024-12-10 04:10:49.200052] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:01.015 [… 4 repeated admin-qpair records elided: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 each aborted as SQ DELETION (00/08) …]
00:24:01.015 [2024-12-10 04:10:49.200150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:01.015 [2024-12-10 04:10:49.200192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e15d0 (9): Bad file descriptor
00:24:01.015 [2024-12-10 04:10:49.203927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:01.015 [2024-12-10 04:10:49.274840] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:01.015 11163.40 IOPS, 43.61 MiB/s [2024-12-10T03:11:00.301Z] 11176.33 IOPS, 43.66 MiB/s [2024-12-10T03:11:00.301Z] 11185.71 IOPS, 43.69 MiB/s [2024-12-10T03:11:00.301Z] 11213.88 IOPS, 43.80 MiB/s [2024-12-10T03:11:00.301Z] 11239.67 IOPS, 43.90 MiB/s
00:24:01.015 [… ~128 aborted commands elided (04:10:53.634-636): queued READ I/O (sqid:1, lba 63784-64280) and WRITE I/O (sqid:1, lba 64288-64800) each completed as ABORTED - SQ DELETION (00/08) ahead of the next failover …]
00:24:01.017 [2024-12-10 04:10:53.636290] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:01.017 [… 4 repeated admin-qpair records elided: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each aborted as SQ DELETION (00/08) …]
00:24:01.017 [2024-12-10 04:10:53.636367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:01.017 [2024-12-10 04:10:53.636398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e15d0 (9): Bad file descriptor
00:24:01.017 [2024-12-10 04:10:53.639316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:01.017 [2024-12-10 04:10:53.823713] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:01.017 11053.80 IOPS, 43.18 MiB/s [2024-12-10T03:11:00.303Z] 11084.55 IOPS, 43.30 MiB/s [2024-12-10T03:11:00.303Z] 11118.50 IOPS, 43.43 MiB/s [2024-12-10T03:11:00.303Z] 11159.69 IOPS, 43.59 MiB/s [2024-12-10T03:11:00.303Z] 11184.71 IOPS, 43.69 MiB/s [2024-12-10T03:11:00.303Z] 11205.47 IOPS, 43.77 MiB/s
00:24:01.017 Latency(us)
00:24:01.017 [2024-12-10T03:11:00.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:01.017 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:01.017 Verification LBA range: start 0x0 length 0x4000
00:24:01.017 NVMe0n1 : 15.01 11204.46 43.77 916.89 0.00 10538.34 460.31 21470.84
00:24:01.017 [2024-12-10T03:11:00.303Z] ===================================================================================================================
00:24:01.017 [2024-12-10T03:11:00.304Z] Total : 11204.46 43.77 916.89 0.00 10538.34 460.31 21470.84
00:24:01.018 Received shutdown signal, test time was about 15.000000 seconds
00:24:01.018 Latency(us)
00:24:01.018 [2024-12-10T03:11:00.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:01.018 [2024-12-10T03:11:00.304Z] ===================================================================================================================
00:24:01.018 [2024-12-10T03:11:00.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=156498
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 156498 /var/tmp/bdevperf.sock
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 156498 ']'
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:01.018 04:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:01.018 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:01.018 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
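The trace above is the check-and-relaunch step of failover.sh: count the controller resets logged by the finished bdevperf run (one per failover hop, three expected), then start a fresh bdevperf that idles until driven over RPC. A minimal sketch of that step, reusing only paths and flags visible in this trace; $SPDK_DIR is an assumed stand-in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the pid is specific to this run:

    # One 'Resetting controller successful' is expected per failover hop (3 here)
    count=$(grep -c 'Resetting controller successful' "$SPDK_DIR/test/nvmf/host/try.txt")
    (( count != 3 )) && exit 1
    # Relaunch bdevperf; -z makes it wait for a perform_tests RPC before running
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!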
00:24:01.018 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:01.276 [2024-12-10 04:11:00.335128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:01.276 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:01.276 [2024-12-10 04:11:00.539744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:01.535 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:01.793 NVMe0n1
00:24:01.793 04:11:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:02.051
00:24:02.051 04:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:02.309
00:24:02.309 04:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:02.309 04:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:02.568 04:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.827 04:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:06.114 04:11:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:06.114 04:11:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
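The sequence above is the heart of the multipath setup: the target exposes the same subsystem on three TCP ports, the initiator registers all three as failover paths under one bdev, and removing the active path forces bdev_nvme to fail over. Condensed into a sketch using the same commands as the trace; only the loop and the line wrapping are additions here:

    # Target: listen on two additional ports for the subsystem
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator (bdevperf RPC socket): register 4420/4421/4422 as one failover set
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # Drop the active path; I/O resumes on the next path after the controller reset
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1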
"workload": "verify", 00:24:07.050 "status": "finished", 00:24:07.050 "verify_range": { 00:24:07.050 "start": 0, 00:24:07.050 "length": 16384 00:24:07.050 }, 00:24:07.050 "queue_depth": 128, 00:24:07.050 "io_size": 4096, 00:24:07.050 "runtime": 1.00885, 00:24:07.050 "iops": 11282.152946424147, 00:24:07.050 "mibps": 44.07090994696932, 00:24:07.050 "io_failed": 0, 00:24:07.050 "io_timeout": 0, 00:24:07.050 "avg_latency_us": 11303.971224071425, 00:24:07.050 "min_latency_us": 1357.5314285714285, 00:24:07.050 "max_latency_us": 15229.318095238095 00:24:07.050 } 00:24:07.050 ], 00:24:07.050 "core_count": 1 00:24:07.050 } 00:24:07.050 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:07.050 [2024-12-10 04:10:59.924996] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:24:07.050 [2024-12-10 04:10:59.925049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156498 ] 00:24:07.050 [2024-12-10 04:10:59.998587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.050 [2024-12-10 04:11:00.050330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.050 [2024-12-10 04:11:01.872544] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:07.050 [2024-12-10 04:11:01.872586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.050 [2024-12-10 04:11:01.872598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.050 [2024-12-10 04:11:01.872607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.050 [2024-12-10 04:11:01.872614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.050 [2024-12-10 04:11:01.872622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.050 [2024-12-10 04:11:01.872628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.050 [2024-12-10 04:11:01.872636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.050 [2024-12-10 04:11:01.872642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.050 [2024-12-10 04:11:01.872649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:24:07.050 [2024-12-10 04:11:01.872674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:07.050 [2024-12-10 04:11:01.872688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fca5d0 (9): Bad file descriptor 00:24:07.050 [2024-12-10 04:11:01.918408] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:07.050 Running I/O for 1 seconds... 00:24:07.050 11247.00 IOPS, 43.93 MiB/s 00:24:07.050 Latency(us) 00:24:07.050 [2024-12-10T03:11:06.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.050 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.050 Verification LBA range: start 0x0 length 0x4000 00:24:07.050 NVMe0n1 : 1.01 11282.15 44.07 0.00 0.00 11303.97 1357.53 15229.32 00:24:07.050 [2024-12-10T03:11:06.336Z] =================================================================================================================== 00:24:07.050 [2024-12-10T03:11:06.336Z] Total : 11282.15 44.07 0.00 0.00 11303.97 1357.53 15229.32 00:24:07.050 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.050 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:07.308 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.566 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:07.566 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.566 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.824 04:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:11.111 04:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.111 04:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 156498 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 156498 ']' 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 156498 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156498 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156498' 00:24:11.111 killing process with pid 156498 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 156498 00:24:11.111 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 156498 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.370 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.370 rmmod nvme_tcp 00:24:11.629 rmmod nvme_fabrics 00:24:11.629 rmmod nvme_keyring 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 153609 ']' 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 153609 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 153609 ']' 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 153609 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153609 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153609' 00:24:11.629 killing process with pid 153609 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 153609 00:24:11.629 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 153609 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
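For reference, the failover path exercised above reduces to the following RPC sequence, condensed from the commands recorded in this log (paths shortened to their repo-relative form; the IPs, ports, NQN, and bdevperf socket are as used in this run — a sketch of the flow, not a standalone script):

# add two extra listeners so the host has secondary paths to the same subsystem
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# register all three paths with bdevperf's bdev_nvme layer; -x failover marks
# the additional trids as standby paths for the same controller NVMe0
for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done

# drop the active path, give the reset a moment, then drive I/O; per the
# try.txt excerpt above, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
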
00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.888 04:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.795 04:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.795 00:24:13.795 real 0m37.291s 00:24:13.795 user 1m57.956s 00:24:13.795 sys 0m7.829s 00:24:13.795 04:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.795 04:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.795 ************************************ 00:24:13.795 END TEST nvmf_failover 00:24:13.795 ************************************ 00:24:13.795 04:11:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:13.795 04:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.795 04:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.795 04:11:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.795 ************************************ 00:24:13.795 START TEST nvmf_host_discovery 00:24:13.795 ************************************ 00:24:13.795 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:14.054 * Looking for test storage... 
00:24:14.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.054 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.055 --rc genhtml_branch_coverage=1 00:24:14.055 --rc genhtml_function_coverage=1 00:24:14.055 --rc genhtml_legend=1 00:24:14.055 --rc geninfo_all_blocks=1 00:24:14.055 --rc geninfo_unexecuted_blocks=1 00:24:14.055 00:24:14.055 ' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.055 --rc genhtml_branch_coverage=1 00:24:14.055 --rc genhtml_function_coverage=1 00:24:14.055 --rc genhtml_legend=1 00:24:14.055 --rc geninfo_all_blocks=1 00:24:14.055 --rc geninfo_unexecuted_blocks=1 00:24:14.055 00:24:14.055 ' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.055 --rc genhtml_branch_coverage=1 00:24:14.055 --rc genhtml_function_coverage=1 00:24:14.055 --rc genhtml_legend=1 00:24:14.055 --rc geninfo_all_blocks=1 00:24:14.055 --rc geninfo_unexecuted_blocks=1 00:24:14.055 00:24:14.055 ' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:14.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.055 --rc genhtml_branch_coverage=1 00:24:14.055 --rc genhtml_function_coverage=1 00:24:14.055 --rc genhtml_legend=1 00:24:14.055 --rc geninfo_all_blocks=1 00:24:14.055 --rc geninfo_unexecuted_blocks=1 00:24:14.055 00:24:14.055 ' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:14.055 04:11:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.055 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.056 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.056 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.056 04:11:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.628 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.629 04:11:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.629 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.629 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.629 
04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.629 04:11:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:24:20.629 00:24:20.629 --- 10.0.0.2 ping statistics --- 00:24:20.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.629 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:20.629 00:24:20.629 --- 10.0.0.1 ping statistics --- 00:24:20.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.629 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=161758 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 161758 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 161758 ']' 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.629 [2024-12-10 04:11:19.198326] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:24:20.629 [2024-12-10 04:11:19.198377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.629 [2024-12-10 04:11:19.277631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.629 [2024-12-10 04:11:19.316991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.629 [2024-12-10 04:11:19.317026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.629 [2024-12-10 04:11:19.317034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.629 [2024-12-10 04:11:19.317040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.629 [2024-12-10 04:11:19.317045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.629 [2024-12-10 04:11:19.317541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.629 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 [2024-12-10 04:11:19.452818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 [2024-12-10 04:11:19.464994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 null0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 null1 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=161780 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 161780 /tmp/host.sock 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 161780 ']' 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:20.630 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 [2024-12-10 04:11:19.540244] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:24:20.630 [2024-12-10 04:11:19.540283] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161780 ] 00:24:20.630 [2024-12-10 04:11:19.612129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.630 [2024-12-10 04:11:19.650826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.630 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq 
-r '.[].name' 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.889 04:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.889 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.890 [2024-12-10 04:11:20.070558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.890 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:21.148 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:21.149 04:11:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:21.715 [2024-12-10 04:11:20.820672] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.716 [2024-12-10 04:11:20.820690] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.716 [2024-12-10 04:11:20.820702] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.716 
[2024-12-10 04:11:20.948102] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:21.974 [2024-12-10 04:11:21.049770] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:21.974 [2024-12-10 04:11:21.050511] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x225cfa0:1 started. 00:24:21.974 [2024-12-10 04:11:21.051838] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:21.974 [2024-12-10 04:11:21.051854] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:21.974 [2024-12-10 04:11:21.059618] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x225cfa0 was disconnected and freed. delete nvme_qpair. 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.233 04:11:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.233 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:22.234 04:11:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:22.234 [2024-12-10 04:11:21.472294] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x225d320:1 started. 
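The trace above steps through the test's generic polling helper: waitforcondition takes a bash condition string, re-evaluates it with eval up to ten times, and sleeps a second between attempts. A minimal reconstruction of that loop from the @918-@924 lines visible here (the real helper lives in autotest_common.sh; this sketch only mirrors what the trace shows, including the return-1-on-timeout, which is inferred):

    # Poll a condition string until it holds or ~10 attempts elapse.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval lets the condition string call helpers such as
            # get_bdev_list and compare their output to an expected value.
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1    # inferred: the trace never reaches the timeout path
    }

    # Usage, as in the trace:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'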
00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.234 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.493 [2024-12-10 04:11:21.522746] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x225d320 was disconnected and freed. delete nvme_qpair. 
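get_bdev_list, checked just above, is a one-liner over the host application's RPC socket: it asks for every bdev, extracts the names with jq, and flattens them into a single sorted line so the comparison against "nvme0n1 nvme0n2" is a plain string match. A standalone sketch of the same pipeline, assuming SPDK's scripts/rpc.py is reachable as rpc.py (the test's rpc_cmd wrapper resolves to the equivalent invocation):

    get_bdev_list() {
        # bdev_get_bdevs returns a JSON array of bdev objects; keep only
        # the .name fields and join them as "nvme0n1 nvme0n2 ...".
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }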
00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.493 [2024-12-10 04:11:21.566501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.493 [2024-12-10 04:11:21.566626] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:22.493 [2024-12-10 04:11:21.566643] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:22.493 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # 
local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.494 [2024-12-10 04:11:21.695016] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
[[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:22.494 04:11:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:22.494 [2024-12-10 04:11:21.755565] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:22.494 [2024-12-10 04:11:21.755598] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:22.494 [2024-12-10 04:11:21.755605] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:22.494 [2024-12-10 04:11:21.755610] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:23.430 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.431 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:23.431 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:23.691 04:11:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 [2024-12-10 04:11:22.810623] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:23.691 [2024-12-10 04:11:22.810644] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:23.691 [2024-12-10 04:11:22.814243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.691 [2024-12-10 04:11:22.814260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.691 [2024-12-10 04:11:22.814268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.691 [2024-12-10 04:11:22.814275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.691 [2024-12-10 04:11:22.814298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.691 [2024-12-10 04:11:22.814305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.691 [2024-12-10 04:11:22.814316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.691 [2024-12-10 04:11:22.814323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.691 [2024-12-10 04:11:22.814330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:23.691 04:11:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:23.691 [2024-12-10 04:11:22.824258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.691 [2024-12-10 04:11:22.834294] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.691 [2024-12-10 04:11:22.834307] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.691 [2024-12-10 04:11:22.834314] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.691 [2024-12-10 04:11:22.834319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.691 [2024-12-10 04:11:22.834336] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.691 [2024-12-10 04:11:22.834590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.691 [2024-12-10 04:11:22.834603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.691 [2024-12-10 04:11:22.834612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.691 [2024-12-10 04:11:22.834624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.691 [2024-12-10 04:11:22.834634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.691 [2024-12-10 04:11:22.834640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.691 [2024-12-10 04:11:22.834647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.691 [2024-12-10 04:11:22.834654] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.691 [2024-12-10 04:11:22.834659] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:23.691 [2024-12-10 04:11:22.834666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.691 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 [2024-12-10 04:11:22.844366] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.691 [2024-12-10 04:11:22.844377] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.691 [2024-12-10 04:11:22.844381] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.691 [2024-12-10 04:11:22.844385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.691 [2024-12-10 04:11:22.844398] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.691 [2024-12-10 04:11:22.844569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.691 [2024-12-10 04:11:22.844581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.691 [2024-12-10 04:11:22.844588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.691 [2024-12-10 04:11:22.844598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.691 [2024-12-10 04:11:22.844608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.691 [2024-12-10 04:11:22.844613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.691 [2024-12-10 04:11:22.844620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.691 [2024-12-10 04:11:22.844626] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.691 [2024-12-10 04:11:22.844630] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.691 [2024-12-10 04:11:22.844634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.691 [2024-12-10 04:11:22.854429] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.691 [2024-12-10 04:11:22.854441] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.691 [2024-12-10 04:11:22.854446] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.691 [2024-12-10 04:11:22.854449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.691 [2024-12-10 04:11:22.854463] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:23.691 [2024-12-10 04:11:22.854609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.692 [2024-12-10 04:11:22.854620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.692 [2024-12-10 04:11:22.854627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.692 [2024-12-10 04:11:22.854637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.692 [2024-12-10 04:11:22.854646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.692 [2024-12-10 04:11:22.854652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.692 [2024-12-10 04:11:22.854659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.692 [2024-12-10 04:11:22.854664] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.692 [2024-12-10 04:11:22.854672] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.692 [2024-12-10 04:11:22.854676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.692 [2024-12-10 04:11:22.864494] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.692 [2024-12-10 04:11:22.864507] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.692 [2024-12-10 04:11:22.864511] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.864515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.692 [2024-12-10 04:11:22.864529] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.864684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.692 [2024-12-10 04:11:22.864695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.692 [2024-12-10 04:11:22.864702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.692 [2024-12-10 04:11:22.864712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.692 [2024-12-10 04:11:22.864722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.692 [2024-12-10 04:11:22.864728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.692 [2024-12-10 04:11:22.864735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.692 [2024-12-10 04:11:22.864740] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:23.692 [2024-12-10 04:11:22.864745] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.692 [2024-12-10 04:11:22.864749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.692 [2024-12-10 04:11:22.874560] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.692 [2024-12-10 04:11:22.874575] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.692 [2024-12-10 04:11:22.874580] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.874585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.692 [2024-12-10 04:11:22.874598] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:23.692 [2024-12-10 04:11:22.874876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.692 [2024-12-10 04:11:22.874889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.692 [2024-12-10 04:11:22.874896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.692 [2024-12-10 04:11:22.874906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.692 [2024-12-10 04:11:22.874921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.692 [2024-12-10 04:11:22.874928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.692 [2024-12-10 04:11:22.874934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.692 [2024-12-10 04:11:22.874939] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.692 [2024-12-10 04:11:22.874944] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.692 [2024-12-10 04:11:22.874948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.692 [2024-12-10 04:11:22.884628] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.692 [2024-12-10 04:11:22.884641] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.692 [2024-12-10 04:11:22.884645] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.884650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.692 [2024-12-10 04:11:22.884663] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.884865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.692 [2024-12-10 04:11:22.884877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.692 [2024-12-10 04:11:22.884884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.692 [2024-12-10 04:11:22.884895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.692 [2024-12-10 04:11:22.884904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.692 [2024-12-10 04:11:22.884910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.692 [2024-12-10 04:11:22.884917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.692 [2024-12-10 04:11:22.884923] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:23.692 [2024-12-10 04:11:22.884927] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.692 [2024-12-10 04:11:22.884931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.692 [2024-12-10 04:11:22.894694] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:23.692 [2024-12-10 04:11:22.894710] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:23.692 [2024-12-10 04:11:22.894714] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.894719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.692 [2024-12-10 04:11:22.894733] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.692 [2024-12-10 04:11:22.894938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.692 [2024-12-10 04:11:22.894951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222d410 with addr=10.0.0.2, port=4420 00:24:23.692 [2024-12-10 04:11:22.894958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set 00:24:23.692 [2024-12-10 04:11:22.894969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222d410 (9): Bad file descriptor 00:24:23.692 [2024-12-10 04:11:22.894978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.692 [2024-12-10 04:11:22.894984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.692 [2024-12-10 04:11:22.894991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.692 [2024-12-10 04:11:22.894997] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.692 [2024-12-10 04:11:22.895001] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.692 [2024-12-10 04:11:22.895005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
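The burst of near-identical records above is expected: the test removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) until the discovery poller drops that path and only 4421 remains. The @63 helper used to confirm this reduces each controller to its TCP service IDs; a sketch under the same rpc.py assumption as above:

    get_subsystem_paths() {
        # List one controller's paths and keep only each path's trsvcid,
        # numerically sorted: "4420 4421" collapses to "4421" once the
        # 4420 path is gone.
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }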
00:24:23.692 [2024-12-10 04:11:22.897001] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:23.692 [2024-12-10 04:11:22.897017] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:23.692 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:23.952 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.952 04:11:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.952 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.330 [2024-12-10 04:11:24.188975] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:25.330 [2024-12-10 04:11:24.188990] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:25.330 [2024-12-10 04:11:24.189001] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:25.330 [2024-12-10 04:11:24.275259] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:25.330 [2024-12-10 04:11:24.373886] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:25.330 [2024-12-10 04:11:24.374449] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x222abe0:1 started. 
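The wall of xtrace above is the waitforcondition helper from common/autotest_common.sh doing its job: it takes a condition as a string, re-evaluates it up to max=10 times, and returns success the moment it holds, so the test can wait for get_subsystem_names, get_bdev_list, and get_notification_count to settle after each discovery event. A minimal sketch of that polling idiom — the eval loop and the max=10 bound are taken from the trace, but the one-second pause between attempts is an assumption, not copied from the real helper:

    waitforcondition() {
        local cond=$1   # condition string, e.g. '[[ "$(get_bdev_list)" == "" ]]'
        local max=10    # bounded retries, as in the trace
        while (( max-- )); do
            # eval re-runs any command substitutions inside $cond on each pass
            if eval "$cond"; then
                return 0
            fi
            sleep 1     # assumed back-off, not copied from the real helper
        done
        return 1
    }

    # usage mirroring the trace: block until stop_discovery has torn everything down
    waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'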
00:24:25.330 [2024-12-10 04:11:24.375940] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:25.330 [2024-12-10 04:11:24.375963] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.330 request: 00:24:25.330 { 00:24:25.330 "name": "nvme", 00:24:25.330 "trtype": "tcp", 00:24:25.330 "traddr": "10.0.0.2", 00:24:25.330 "adrfam": "ipv4", 00:24:25.330 "trsvcid": "8009", 00:24:25.330 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:25.330 "wait_for_attach": true, 00:24:25.330 "method": "bdev_nvme_start_discovery", 00:24:25.330 "req_id": 1 00:24:25.330 } 00:24:25.330 Got JSON-RPC error response 00:24:25.330 response: 00:24:25.330 { 00:24:25.330 "code": -17, 00:24:25.330 "message": "File exists" 00:24:25.330 } 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.330 04:11:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:25.330 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.330 [2024-12-10 04:11:24.418764] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x222abe0 was disconnected and freed. delete nvme_qpair. 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.331 request: 00:24:25.331 { 00:24:25.331 "name": "nvme_second", 00:24:25.331 "trtype": "tcp", 00:24:25.331 "traddr": "10.0.0.2", 00:24:25.331 "adrfam": "ipv4", 00:24:25.331 "trsvcid": "8009", 00:24:25.331 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:25.331 "wait_for_attach": true, 00:24:25.331 "method": 
"bdev_nvme_start_discovery", 00:24:25.331 "req_id": 1 00:24:25.331 } 00:24:25.331 Got JSON-RPC error response 00:24:25.331 response: 00:24:25.331 { 00:24:25.331 "code": -17, 00:24:25.331 "message": "File exists" 00:24:25.331 } 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:25.331 04:11:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.331 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.708 [2024-12-10 04:11:25.615973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.708 [2024-12-10 04:11:25.615998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222a4f0 with addr=10.0.0.2, port=8010 00:24:26.708 [2024-12-10 04:11:25.616011] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:26.708 [2024-12-10 04:11:25.616017] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:26.708 [2024-12-10 04:11:25.616023] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:27.654 [2024-12-10 04:11:26.618410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.654 [2024-12-10 04:11:26.618433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222a4f0 with addr=10.0.0.2, port=8010 00:24:27.654 [2024-12-10 04:11:26.618443] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:27.654 [2024-12-10 04:11:26.618449] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:27.654 [2024-12-10 04:11:26.618455] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:28.702 [2024-12-10 04:11:27.620567] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:28.702 request: 00:24:28.702 { 00:24:28.702 "name": "nvme_second", 00:24:28.702 "trtype": "tcp", 00:24:28.702 "traddr": "10.0.0.2", 00:24:28.702 "adrfam": "ipv4", 00:24:28.702 "trsvcid": "8010", 00:24:28.702 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:28.702 "wait_for_attach": false, 00:24:28.702 "attach_timeout_ms": 3000, 00:24:28.702 "method": "bdev_nvme_start_discovery", 00:24:28.702 "req_id": 1 00:24:28.702 } 00:24:28.702 Got JSON-RPC error response 00:24:28.702 response: 00:24:28.702 { 00:24:28.702 "code": -110, 00:24:28.702 "message": "Connection timed out" 00:24:28.702 } 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:28.702 04:11:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:28.702 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 161780 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.703 rmmod nvme_tcp 00:24:28.703 rmmod nvme_fabrics 00:24:28.703 rmmod nvme_keyring 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 161758 ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 161758 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 161758 ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 161758 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161758 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161758' 00:24:28.703 killing process with pid 161758 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 161758 
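Every failure in the request/response blocks above is deliberate: re-issuing bdev_nvme_start_discovery with a name already in use yields JSON-RPC error -17 ("File exists"), and pointing a discovery at port 8010, where nothing answers within the 3000 ms attach timeout, yields -110 ("Connection timed out"). The NOT wrapper from common/autotest_common.sh inverts the exit status, so the failing RPC is exactly what makes the test pass. A simplified sketch of that negative-assertion idea — the real helper also validates the argument via valid_exec_arg and special-cases exit codes above 128, both elided here:

    NOT() {
        local es=0
        "$@" || es=$?   # run the wrapped command, capture its exit status
        # succeed exactly when the wrapped command failed, as in the trace
        (( !es == 0 ))
    }

    # usage mirroring the trace: a second discovery under the same name must fail
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w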
00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 161758 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.703 04:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.240 00:24:31.240 real 0m16.952s 00:24:31.240 user 0m20.022s 00:24:31.240 sys 0m5.879s 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 ************************************ 00:24:31.240 END TEST nvmf_host_discovery 00:24:31.240 ************************************ 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 ************************************ 00:24:31.240 START TEST nvmf_host_multipath_status 00:24:31.240 ************************************ 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:31.240 * Looking for test storage... 
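Here one suite hands off to the next: nvmf_host_discovery finishes (16.952 s wall clock) and run_test launches multipath_status.sh with the transport passed as a flag. Judging by its output, run_test validates its argument count, prints the START TEST and END TEST banners, and times the script; a stripped-down sketch of that wrapper, inferred from the trace rather than copied from autotest_common.sh:

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1          # argument-count check, as in the trace
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys lines above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_host_multipath_status \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh \
        --transport=tcp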
00:24:31.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.240 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:31.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.241 --rc genhtml_branch_coverage=1 00:24:31.241 --rc genhtml_function_coverage=1 00:24:31.241 --rc genhtml_legend=1 00:24:31.241 --rc geninfo_all_blocks=1 00:24:31.241 --rc geninfo_unexecuted_blocks=1 00:24:31.241 00:24:31.241 ' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.241 --rc genhtml_branch_coverage=1 00:24:31.241 --rc genhtml_function_coverage=1 00:24:31.241 --rc genhtml_legend=1 00:24:31.241 --rc geninfo_all_blocks=1 00:24:31.241 --rc geninfo_unexecuted_blocks=1 00:24:31.241 00:24:31.241 ' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.241 --rc genhtml_branch_coverage=1 00:24:31.241 --rc genhtml_function_coverage=1 00:24:31.241 --rc genhtml_legend=1 00:24:31.241 --rc geninfo_all_blocks=1 00:24:31.241 --rc geninfo_unexecuted_blocks=1 00:24:31.241 00:24:31.241 ' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.241 --rc genhtml_branch_coverage=1 00:24:31.241 --rc genhtml_function_coverage=1 00:24:31.241 --rc genhtml_legend=1 00:24:31.241 --rc geninfo_all_blocks=1 00:24:31.241 --rc geninfo_unexecuted_blocks=1 00:24:31.241 00:24:31.241 ' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
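Before sourcing nvmf/common.sh, multipath_status.sh probes the installed lcov and chooses coverage flags by version: lt 1.15 2 expands into cmp_versions from scripts/common.sh, which splits both strings on '.', '-' and ':' and compares them component by component, so an lcov 1.x gets the old-style --rc lcov_branch_coverage=1 options. A condensed sketch of that comparison — the real function also handles '>', non-numeric fields via its decimal helper, and more; only the strict less-than path is shown, under a hypothetical name:

    version_lt() {   # hypothetical name; mirrors cmp_versions "$1" '<' "$2"
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with zeros
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal is not strictly less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'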
00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.241 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.812 04:11:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:37.812 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
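The preamble here enumerates candidate NVMe-oF NICs by PCI vendor/device ID: the e810 array collects Intel devices 0x1592 and 0x159b, x722 collects 0x37d2, and the mlx array a list of Mellanox IDs; each match is then announced ("Found 0000:af:00.0 (0x8086 - 0x159b)") and its ice driver checked against the unknown/unbound cases. Outside the harness the same classification can be approximated with lspci; an illustrative one-liner only, not the harness's pci_bus_cache mechanism:

    # list Intel E810 ports the way the harness classifies them:
    # vendor 0x8086, device 0x1592 or 0x159b
    # (lspci -nD prints '<domain:bus:dev.fn> <class>: <vendor>:<device> ...')
    lspci -nD | awk '$3 ~ /^8086:(1592|159b)$/ { print $1 }'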
00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:37.812 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:37.812 Found net devices under 0000:af:00.0: cvl_0_0 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:24:37.812 Found net devices under 0000:af:00.1: cvl_0_1 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.812 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.813 04:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.813 04:11:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:24:37.813 00:24:37.813 --- 10.0.0.2 ping statistics --- 00:24:37.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.813 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:37.813 00:24:37.813 --- 10.0.0.1 ping statistics --- 00:24:37.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.813 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=166766 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 166766 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 166766 ']' 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.813 04:11:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.813 [2024-12-10 04:11:36.225713] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:24:37.813 [2024-12-10 04:11:36.225756] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.813 [2024-12-10 04:11:36.304693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.813 [2024-12-10 04:11:36.343987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.813 [2024-12-10 04:11:36.344022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.813 [2024-12-10 04:11:36.344029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.813 [2024-12-10 04:11:36.344035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.813 [2024-12-10 04:11:36.344040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.813 [2024-12-10 04:11:36.345125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.813 [2024-12-10 04:11:36.345126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=166766 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:37.813 [2024-12-10 04:11:36.645406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:37.813 Malloc0 00:24:37.813 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:38.072 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.072 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.331 [2024-12-10 04:11:37.475295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.331 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:38.590 [2024-12-10 04:11:37.671777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=167016 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 167016 /var/tmp/bdevperf.sock 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 167016 ']' 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
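For readability, the target-side setup traced above condenses to the following RPC sequence, copied verbatim from the xtrace lines. bdev_malloc_create 64 512 creates a 64 MiB RAM-backed bdev with 512-byte blocks; -r on nvmf_create_subsystem enables ANA reporting, which is what permits the per-listener ANA state changes later in the run; the two listeners on ports 4420 and 4421 are the two I/O paths being multipathed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then launched with -z (start suspended, waiting for an RPC trigger) on core mask 0x4 with its own RPC socket /var/tmp/bdevperf.sock, so the host-side controllers can be attached before the 90-second verify workload begins.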
00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.590 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:38.848 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.848 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:38.848 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:39.107 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:39.366 Nvme0n1 00:24:39.366 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:39.624 Nvme0n1 00:24:39.624 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:39.883 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:41.787 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:41.787 04:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:42.045 04:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.304 04:11:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:43.240 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:43.240 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:43.240 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.240 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.498 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.499 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.757 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.757 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.757 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.757 04:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.050 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.050 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.050 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.050 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:44.309 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
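Every status check in this run follows the pattern just shown: bdev_nvme_get_io_paths is issued against bdevperf's RPC socket and jq extracts one attribute (current, connected, or accessible) for the io_path whose trsvcid matches the port. A sketch of the helper, reconstructed from the xtrace (the function name matches multipath_status.sh; the body is inferred from the trace, with $rpc standing for the rpc.py path used throughout):

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]    # returns non-zero on mismatch
    }

check_status takes six booleans, in the order 4420.current, 4421.current, 4420.connected, 4421.connected, 4420.accessible, 4421.accessible. The set_ANA_state call begun above continues below: its second RPC marks listener 4421 optimized, after which current is expected to move from the 4420 path to the 4421 path while both paths remain connected and accessible.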
00:24:44.568 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:44.826 04:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:45.761 04:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:45.761 04:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:45.761 04:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.761 04:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.024 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.024 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.024 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.024 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.281 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.281 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.281 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.281 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.540 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.540 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.540 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.540 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.799 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.799 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.799 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:24:46.799 04:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.799 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.799 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:46.799 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.799 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.058 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.058 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:47.058 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.317 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:47.575 04:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:48.512 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:48.512 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.512 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.512 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.772 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.772 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.772 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.772 04:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.031 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.031 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.031 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.031 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.031 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.032 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.032 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.032 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.290 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.290 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.290 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.290 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.549 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.549 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:49.549 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.549 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.807 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.807 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:49.807 04:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.066 04:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.066 04:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:51.443 04:11:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.443 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.701 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.702 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.702 04:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.960 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.960 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:51.960 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:51.960 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.218 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.218 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.218 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.218 04:11:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.477 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.477 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:52.477 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.735 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:52.735 04:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:54.111 04:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:54.111 04:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.111 04:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.111 04:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.111 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.370 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.370 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.370 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.370 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.628 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.628 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.628 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.628 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.887 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.887 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:54.887 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.887 04:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.146 04:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.146 04:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:55.146 04:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:55.146 04:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.404 04:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:56.339 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:56.339 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.339 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.339 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.597 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.597 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.597 04:11:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.597 04:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.856 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.856 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.856 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.856 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.115 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.115 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.115 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.115 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.373 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.631 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.631 04:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:57.890 04:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:57.890 04:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:58.149 04:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.407 04:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:59.343 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:59.343 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.343 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.343 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.602 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.602 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.602 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.602 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.861 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.861 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.861 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.861 04:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.861 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.861 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.861 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.861 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.120 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.120 04:11:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.120 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.120 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.379 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.379 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.379 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.379 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.637 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.637 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:00.637 04:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.896 04:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:01.155 04:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:02.091 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:02.091 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:02.091 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.091 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.350 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.350 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.350 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.350 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.608 04:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.866 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.866 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:02.866 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.866 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.125 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.125 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.125 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.125 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:03.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.642 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:03.643 04:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
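All of the ANA toggles in this run are the same two-argument helper; the RPCs appear verbatim in the trace, with the first argument applied to listener 4420 and the second to 4421 (sketch below; the roughly one-second sleep that follows each toggle in the trace gives the host time to observe the new ANA state before the next check_status):

    set_ANA_state() {    # e.g. set_ANA_state non_optimized non_optimized
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Note the change in expectations since bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active earlier in the trace: under the default active_passive policy at most one path reported current, whereas under active_active every path in the best available ANA state does. That is why optimized/optimized and non_optimized/non_optimized both yield current=true on 4420 and 4421 in the checks that follow, while mixed non_optimized/optimized still leaves only the optimized path current.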
00:25:05.020 04:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:05.020 04:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.020 04:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.020 04:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.020 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.020 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:05.020 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.020 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.279 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.279 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.279 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.279 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.538 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.797 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.797 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.797 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.797 04:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.055 04:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.055 04:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:06.055 04:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:06.313 04:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:06.313 04:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.691 04:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:07.951 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.210 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.469 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.469 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:08.469 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.469 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 167016 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 167016 ']' 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 167016 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167016 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167016' 00:25:08.728 killing process with pid 167016 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 167016 00:25:08.728 04:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 167016 00:25:08.728 { 00:25:08.728 "results": [ 00:25:08.728 { 00:25:08.728 "job": "Nvme0n1", 00:25:08.728 
"core_mask": "0x4", 00:25:08.728 "workload": "verify", 00:25:08.728 "status": "terminated", 00:25:08.728 "verify_range": { 00:25:08.728 "start": 0, 00:25:08.728 "length": 16384 00:25:08.728 }, 00:25:08.728 "queue_depth": 128, 00:25:08.728 "io_size": 4096, 00:25:08.728 "runtime": 28.873848, 00:25:08.728 "iops": 10690.12346397335, 00:25:08.728 "mibps": 41.7582947811459, 00:25:08.728 "io_failed": 0, 00:25:08.728 "io_timeout": 0, 00:25:08.728 "avg_latency_us": 11954.346673380678, 00:25:08.728 "min_latency_us": 477.8666666666667, 00:25:08.728 "max_latency_us": 3083812.083809524 00:25:08.728 } 00:25:08.728 ], 00:25:08.728 "core_count": 1 00:25:08.728 } 00:25:09.011 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 167016 00:25:09.011 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:09.011 [2024-12-10 04:11:37.729630] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:25:09.011 [2024-12-10 04:11:37.729682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167016 ] 00:25:09.011 [2024-12-10 04:11:37.803091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.011 [2024-12-10 04:11:37.843111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.011 Running I/O for 90 seconds... 00:25:09.011 11579.00 IOPS, 45.23 MiB/s [2024-12-10T03:12:08.297Z] 11571.50 IOPS, 45.20 MiB/s [2024-12-10T03:12:08.297Z] 11550.33 IOPS, 45.12 MiB/s [2024-12-10T03:12:08.297Z] 11540.25 IOPS, 45.08 MiB/s [2024-12-10T03:12:08.297Z] 11574.60 IOPS, 45.21 MiB/s [2024-12-10T03:12:08.297Z] 11594.67 IOPS, 45.29 MiB/s [2024-12-10T03:12:08.297Z] 11579.71 IOPS, 45.23 MiB/s [2024-12-10T03:12:08.297Z] 11585.38 IOPS, 45.26 MiB/s [2024-12-10T03:12:08.297Z] 11577.67 IOPS, 45.23 MiB/s [2024-12-10T03:12:08.297Z] 11579.20 IOPS, 45.23 MiB/s [2024-12-10T03:12:08.297Z] 11570.45 IOPS, 45.20 MiB/s [2024-12-10T03:12:08.297Z] 11571.00 IOPS, 45.20 MiB/s [2024-12-10T03:12:08.297Z] [2024-12-10 04:11:51.744222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.011 [2024-12-10 04:11:51.744260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.011 [2024-12-10 04:11:51.744280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.011 [2024-12-10 04:11:51.744304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.011 [2024-12-10 04:11:51.744317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.011 [2024-12-10 04:11:51.744324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.011 [2024-12-10 04:11:51.744337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.011 
00:25:09.011 [2024-12-10 04:11:51.744222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.011 [2024-12-10 04:11:51.744260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:09.011 [2024-12-10 04:11:51.744280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.011 [2024-12-10 04:11:51.744304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:09.011 [2024-12-10 04:11:51.744317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.011 [2024-12-10 04:11:51.744324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:09.011 [2024-12-10 04:11:51.744337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.011 [2024-12-10 04:11:51.744344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
[... a few hundred further nvme_qpair entries elided (04:11:51.744356 through 04:11:51.760780): the same command/completion pattern repeats for WRITE lba 128384-129096 and retried READ lba 128080-128344 on qid:1, including a second submission round for the same LBAs, and every command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); none surface as failures ("io_failed": 0 above) because the I/O is retried on the still-accessible path ...]
cid:10 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.015 [2024-12-10 04:11:51.760610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.015 [2024-12-10 04:11:51.760623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.015 [2024-12-10 04:11:51.760630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.015 [2024-12-10 04:11:51.760642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.015 [2024-12-10 04:11:51.760649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.015 [2024-12-10 04:11:51.760661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.015 [2024-12-10 04:11:51.760667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.015 [2024-12-10 04:11:51.760680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760792] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.760850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.760857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 
04:11:51.761850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.016 [2024-12-10 04:11:51.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.016 [2024-12-10 04:11:51.761925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:09.016 [2024-12-10 04:11:51.761941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.016 [2024-12-10 04:11:51.761950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.761966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.761975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.761991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128144 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 
p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.017 [2024-12-10 04:11:51.762763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.017 [2024-12-10 04:11:51.762941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.017 [2024-12-10 04:11:51.762957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.762966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.762983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.762991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.763190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.763199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:09.018 [2024-12-10 04:11:51.764778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.018 [2024-12-10 04:11:51.764789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 
cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.764990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.019 [2024-12-10 04:11:51.765016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.765950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 
[2024-12-10 04:11:51.765980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.765996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.019 [2024-12-10 04:11:51.766190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.019 [2024-12-10 04:11:51.766206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128992 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.766546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.766920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.766930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.020 [2024-12-10 04:11:51.773467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:09.020 [2024-12-10 04:11:51.773777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.020 [2024-12-10 04:11:51.773791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 
04:11:51.773822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.773853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.773884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.773920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.773951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.773971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.773981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.774012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.021 [2024-12-10 04:11:51.774045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.774076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.774922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128368 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.774956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.774975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.774986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 
m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.021 [2024-12-10 04:11:51.775841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.021 [2024-12-10 04:11:51.775860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.775872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.775891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.775904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.775924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.775937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.775957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.775970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.022 [2024-12-10 04:11:51.776644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.776883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.776893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.022 [2024-12-10 04:11:51.777945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.022 [2024-12-10 04:11:51.777956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.777975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.777986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 
p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.023 [2024-12-10 04:11:51.778657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.023 [2024-12-10 04:11:51.778866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:09.023 [2024-12-10 04:11:51.778885 - 04:11:51.786856] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-command output collapsed: every outstanding I/O on qid:1 (READ commands, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, and WRITE commands, SGL DATA BLOCK OFFSET 0x0 len:0x1000; lba 128080-129096, len:8) is printed and completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, sqhd wrapping repeatedly, p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786856] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.786882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.786908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.786937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.786981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-12-10 04:11:51.786990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.787971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.787983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.788010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.788037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.788068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.788097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.029 [2024-12-10 04:11:51.788127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.029 [2024-12-10 04:11:51.788146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:25:09.030 [2024-12-10 04:11:51.788678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.788766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.788977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.788997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.030 [2024-12-10 04:11:51.789198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:09.030 [2024-12-10 04:11:51.789215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-12-10 04:11:51.789224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.031 [2024-12-10 04:11:51.789683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.789864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 
04:11:51.790785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.790974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.790983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.791000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.791009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.791027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.791036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 
cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.791053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.031 [2024-12-10 04:11:51.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.031 [2024-12-10 04:11:51.791080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:102 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.791957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.791974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.032 [2024-12-10 04:11:51.791984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.792001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.792010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.792027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.792036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.792065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.792071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.792083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.032 [2024-12-10 04:11:51.792093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.032 [2024-12-10 04:11:51.792106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.032 [2024-12-10 04:11:51.792112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:09.032 [2024-12-10 04:11:51.792581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.032 [2024-12-10 04:11:51.792593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:09.033 [2024-12-10 04:11:51.792740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.033 [2024-12-10 04:11:51.792747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
[... repeated nvme_qpair.c *NOTICE* command/completion pairs trimmed, 04:11:51.792759 through 04:11:51.797373: WRITE (lba:128352-129096) and READ (lba:128080-128344) on sqid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:25:09.038 [2024-12-10 04:11:51.797874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.038 [2024-12-10 04:11:51.797886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:09.038 [2024-12-10 04:11:51.797901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.038 [2024-12-10 04:11:51.797908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:09.038 [2024-12-10 04:11:51.797921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.038 [2024-12-10 04:11:51.797931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:09.038 [2024-12-10 04:11:51.797943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.038 [2024-12-10 04:11:51.797950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:09.038 [2024-12-10 04:11:51.797962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:25:09.038 [2024-12-10 04:11:51.797970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.797982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.797989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 
cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.038 [2024-12-10 04:11:51.798711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:09.038 [2024-12-10 04:11:51.798724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.798981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.798988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 
[2024-12-10 04:11:51.799065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.039 [2024-12-10 04:11:51.799183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.039 [2024-12-10 04:11:51.799935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.039 [2024-12-10 04:11:51.799947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.799954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.799966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.799972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.799984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.799991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:25:09.040 [2024-12-10 04:11:51.800156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
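For context on the collapsed run: the two record types are printed by SPDK's nvme_io_qpair_print_command and spdk_nvme_print_completion helpers in nvme_qpair.c (the function names appear in each record), and the "(03/02)" after the status string is the completion status rendered in hex as (status code type / status code). In the NVMe spec, status code type 0x3 is Path Related Status, and status code 0x02 under it is Asymmetric Access Inaccessible, so each queued READ/WRITE on qid:1 here completes with a path error while the namespace's ANA state is reported inaccessible; dnr:0 (Do Not Retry clear) leaves the commands eligible for retry. Below is a minimal sketch for tallying such completions from a saved console log, assuming only the record format visible above; the log file name is a placeholder, and finditer is used because many records are fused onto one physical line:

    import re
    from collections import Counter

    # A completion record looks like:
    #   ... spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
    completion_re = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
        r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)")

    counts = Counter()
    with open("console.log") as log:  # placeholder file name, not produced by this job
        for line in log:
            # many records share one physical line, so scan each line fully
            for m in completion_re.finditer(line):
                counts[(m["status"], m["sct"], m["sc"])] += 1

    for (status, sct, sc), n in counts.most_common():
        print(f"{n:6d}  {status}  (sct=0x{sct} sc=0x{sc})")

Run over the full console output, the most_common listing makes it easy to spot whether any status other than (03/02) appeared during this window.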
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.040 [2024-12-10 04:11:51.800438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.040 [2024-12-10 04:11:51.800609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:09.040 [2024-12-10 04:11:51.800621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.800746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.800991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.800998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.801507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.801528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.801548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.801567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.041 [2024-12-10 04:11:51.801586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.041 
[2024-12-10 04:11:51.801598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.041 [2024-12-10 04:11:51.801875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.041 [2024-12-10 04:11:51.801882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.042 [2024-12-10 04:11:51.801894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.042 [2024-12-10 04:11:51.801901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.042 [2024-12-10 04:11:51.801913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.042 [2024-12-10 04:11:51.801920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.042 [2024-12-10 04:11:51.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.042 [2024-12-10 04:11:51.801941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.042 [2024-12-10 04:11:51.801953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.042 [2024-12-10 04:11:51.801959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.042 [2024-12-10 04:11:51.801971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.042 [2024-12-10 04:11:51.801978] nvme_qpair.c: 
00:25:09.042 [2024-12-10 04:11:51.801990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.801997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.042 [2024-12-10 04:11:51.802519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:09.042 [2024-12-10 04:11:51.802531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.802655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.802668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.043 [2024-12-10 04:11:51.802675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:09.043 [2024-12-10 04:11:51.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.043 [2024-12-10 04:11:51.803793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.044 [2024-12-10 04:11:51.804500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.804518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.804531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:09.044 [2024-12-10 04:11:51.808365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.044 [2024-12-10 04:11:51.808371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.045 [2024-12-10 04:11:51.808392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.808988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.808995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.045 [2024-12-10 04:11:51.809354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:09.045 [2024-12-10 04:11:51.809369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.046 [2024-12-10 04:11:51.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:09.046 [2024-12-10 04:11:51.809805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:09.046 [2024-12-10 04:11:51.809811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0
04:11:51.809811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.809988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128888 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.046 [2024-12-10 04:11:51.810119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:09.046 [2024-12-10 04:11:51.810135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810248] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:11:51.810563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:11:51.810571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:09.047 11314.00 IOPS, 
44.20 MiB/s [2024-12-10T03:12:08.333Z] 10505.86 IOPS, 41.04 MiB/s [2024-12-10T03:12:08.333Z] 9805.47 IOPS, 38.30 MiB/s [2024-12-10T03:12:08.333Z] 9348.88 IOPS, 36.52 MiB/s [2024-12-10T03:12:08.333Z] 9483.59 IOPS, 37.05 MiB/s [2024-12-10T03:12:08.333Z] 9602.89 IOPS, 37.51 MiB/s [2024-12-10T03:12:08.333Z] 9777.68 IOPS, 38.19 MiB/s [2024-12-10T03:12:08.333Z] 9971.30 IOPS, 38.95 MiB/s [2024-12-10T03:12:08.333Z] 10137.90 IOPS, 39.60 MiB/s [2024-12-10T03:12:08.333Z] 10200.45 IOPS, 39.85 MiB/s [2024-12-10T03:12:08.333Z] 10255.78 IOPS, 40.06 MiB/s [2024-12-10T03:12:08.333Z] 10311.62 IOPS, 40.28 MiB/s [2024-12-10T03:12:08.333Z] 10435.60 IOPS, 40.76 MiB/s [2024-12-10T03:12:08.333Z] 10554.15 IOPS, 41.23 MiB/s [2024-12-10T03:12:08.333Z] [2024-12-10 04:12:05.562712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.562812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.562833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.562853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.562872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.562980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.562993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.047 [2024-12-10 04:12:05.563019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:09.047 [2024-12-10 04:12:05.563738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.047 [2024-12-10 04:12:05.563745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:09.048 [2024-12-10 04:12:05.563843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.563993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.563999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.564018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.564135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.048 [2024-12-10 04:12:05.564968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.564980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.564988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:25:09.048 [2024-12-10 04:12:05.565061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.048 [2024-12-10 04:12:05.565149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:09.048 [2024-12-10 04:12:05.565161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.049 [2024-12-10 04:12:05.565175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:09.049 [2024-12-10 04:12:05.565188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:09.049 [2024-12-10 04:12:05.565195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:09.049 10624.89 IOPS, 41.50 MiB/s [2024-12-10T03:12:08.335Z] 10663.82 IOPS, 41.66 MiB/s [2024-12-10T03:12:08.335Z] Received shutdown signal, test time was about 28.874506 seconds 00:25:09.049 00:25:09.049 Latency(us) 00:25:09.049 [2024-12-10T03:12:08.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.049 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.049 Verification LBA range: start 0x0 length 0x4000 00:25:09.049 Nvme0n1 : 28.87 10690.12 41.76 0.00 0.00 11954.35 477.87 3083812.08 00:25:09.049 [2024-12-10T03:12:08.335Z] =================================================================================================================== 00:25:09.049 [2024-12-10T03:12:08.335Z] Total : 10690.12 41.76 0.00 0.00 11954.35 477.87 3083812.08 00:25:09.049 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.308 rmmod nvme_tcp 00:25:09.308 rmmod nvme_fabrics 00:25:09.308 rmmod nvme_keyring 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 166766 ']' 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 166766 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 166766 ']' 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 166766 00:25:09.308 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166766 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166766' 00:25:09.309 killing process with pid 166766 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 166766 00:25:09.309 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 166766 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:09.570 04:12:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.570 04:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.599 00:25:11.599 real 0m40.609s 00:25:11.599 user 1m50.086s 00:25:11.599 sys 0m11.600s 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.599 ************************************ 00:25:11.599 END TEST nvmf_host_multipath_status 00:25:11.599 ************************************ 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.599 ************************************ 00:25:11.599 START TEST nvmf_discovery_remove_ifc 00:25:11.599 ************************************ 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:11.599 * Looking for test storage... 
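(Editor's note: the teardown traced above follows a fixed pattern - delete the subsystem over RPC, sync, unload the kernel NVMe modules in dependency order, kill the target process, filter SPDK's rules out of the iptables state, and flush the test interface. A minimal standalone sketch of that sequence, assuming only the paths and the SPDK_NVMF comment tag seen in this log; it is not the exact SPDK helper:)

  #!/usr/bin/env bash
  # Hedged sketch of the nvmftestfini-style cleanup seen above.
  set -e
  sync                                      # flush dirty pages before unload
  modprobe -v -r nvme-tcp || true           # transport first...
  modprobe -v -r nvme-fabrics || true       # ...then fabrics
  # Drop every rule SPDK tagged with the SPDK_NVMF comment, keep the rest:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1                  # cvl_0_1: the test NIC in this run

(The iptables-save | grep -v | iptables-restore pipeline is the same trick the trace shows: rather than tracking individual rules, the whole ruleset is re-serialized minus the tagged lines.)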
00:25:11.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.599 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.859 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.860 --rc genhtml_branch_coverage=1 00:25:11.860 --rc genhtml_function_coverage=1 00:25:11.860 --rc genhtml_legend=1 00:25:11.860 --rc geninfo_all_blocks=1 00:25:11.860 --rc geninfo_unexecuted_blocks=1 00:25:11.860 00:25:11.860 ' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.860 --rc genhtml_branch_coverage=1 00:25:11.860 --rc genhtml_function_coverage=1 00:25:11.860 --rc genhtml_legend=1 00:25:11.860 --rc geninfo_all_blocks=1 00:25:11.860 --rc geninfo_unexecuted_blocks=1 00:25:11.860 00:25:11.860 ' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.860 --rc genhtml_branch_coverage=1 00:25:11.860 --rc genhtml_function_coverage=1 00:25:11.860 --rc genhtml_legend=1 00:25:11.860 --rc geninfo_all_blocks=1 00:25:11.860 --rc geninfo_unexecuted_blocks=1 00:25:11.860 00:25:11.860 ' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.860 --rc genhtml_branch_coverage=1 00:25:11.860 --rc genhtml_function_coverage=1 00:25:11.860 --rc genhtml_legend=1 00:25:11.860 --rc geninfo_all_blocks=1 00:25:11.860 --rc geninfo_unexecuted_blocks=1 00:25:11.860 00:25:11.860 ' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.860 
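(Editor's note: the lt/cmp_versions trace above implements a plain component-wise comparison - both versions are split on '.', '-' and ':', numeric fields are compared left to right, and missing fields default to 0. A minimal re-implementation of that idea; the function name is mine, not SPDK's exact helper:)

  # Returns 0 (true) when $1 < $2, comparing dot/dash/colon-separated fields.
  version_lt() {
      local IFS=.-:                 # same separators the trace splits on
      local -a a=($1) b=($2)        # unquoted on purpose: IFS does the split
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                      # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check above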
04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
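(Editor's note: the "[: : integer expression expected" error printed above by common.sh line 33 comes from running a numeric test, '[' '' -eq 1 ']', on an empty operand; the test merely returns nonzero, so the run continues. Defaulting the operand silences the noise - the variable name below is illustrative, not the one common.sh actually tests:)

  flag=""
  if [ "${flag:-0}" -eq 1 ]; then    # ${var:-0} substitutes 0 when empty/unset
      echo "feature enabled"
  fi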
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.860 04:12:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:18.432 04:12:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:18.432 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.432 04:12:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:18.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:18.432 Found net devices under 0000:af:00.0: cvl_0_0 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.432 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:18.433 Found net devices under 0000:af:00.1: cvl_0_1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.433 
04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:25:18.433 00:25:18.433 --- 10.0.0.2 ping statistics --- 00:25:18.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.433 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:25:18.433 00:25:18.433 --- 10.0.0.1 ping statistics --- 00:25:18.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.433 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=176099 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 176099 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 176099 ']' 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:18.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.433 04:12:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 [2024-12-10 04:12:17.024378] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:25:18.433 [2024-12-10 04:12:17.024421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.433 [2024-12-10 04:12:17.102000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.433 [2024-12-10 04:12:17.140683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.433 [2024-12-10 04:12:17.140716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.433 [2024-12-10 04:12:17.140723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.433 [2024-12-10 04:12:17.140729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.433 [2024-12-10 04:12:17.140734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.433 [2024-12-10 04:12:17.141224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 [2024-12-10 04:12:17.284715] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.433 [2024-12-10 04:12:17.292880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:18.433 null0 00:25:18.433 [2024-12-10 04:12:17.324871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=176122 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 176122 /tmp/host.sock 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 176122 ']' 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:18.433 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 [2024-12-10 04:12:17.393920] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:25:18.433 [2024-12-10 04:12:17.393958] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176122 ] 00:25:18.433 [2024-12-10 04:12:17.467196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.433 [2024-12-10 04:12:17.508384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.433 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.434 04:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 [2024-12-10 04:12:18.651067] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.370 [2024-12-10 04:12:18.651087] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.370 [2024-12-10 04:12:18.651101] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.629 [2024-12-10 04:12:18.738412] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:19.629 [2024-12-10 04:12:18.799931] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:19.629 [2024-12-10 04:12:18.800718] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1be3b50:1 started. 00:25:19.629 [2024-12-10 04:12:18.802056] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.629 [2024-12-10 04:12:18.802097] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.629 [2024-12-10 04:12:18.802115] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.629 [2024-12-10 04:12:18.802127] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.629 [2024-12-10 04:12:18.802144] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.629 [2024-12-10 04:12:18.809901] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1be3b50 was disconnected and freed. delete nvme_qpair. 
00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:19.629 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.888 04:12:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.823 04:12:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.824 04:12:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.201 04:12:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.201 04:12:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.137 04:12:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.085 04:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.021 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.021 [2024-12-10 04:12:24.243544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:25.021 [2024-12-10 04:12:24.243584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.021 [2024-12-10 04:12:24.243611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.021 [2024-12-10 04:12:24.243621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.021 [2024-12-10 04:12:24.243631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.021 [2024-12-10 04:12:24.243639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.021 [2024-12-10 04:12:24.243646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.021 [2024-12-10 04:12:24.243653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.021 [2024-12-10 04:12:24.243659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.021 [2024-12-10 04:12:24.243666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.022 [2024-12-10 04:12:24.243673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.022 [2024-12-10 04:12:24.243680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0310 is same with the state(6) to be set 00:25:25.022 [2024-12-10 04:12:24.253566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc0310 (9): Bad file descriptor 00:25:25.022 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.022 04:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.022 [2024-12-10 04:12:24.263603] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:25.022 [2024-12-10 04:12:24.263615] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:25.022 [2024-12-10 04:12:24.263621] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:25.022 [2024-12-10 04:12:24.263625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:25.022 [2024-12-10 04:12:24.263647] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.398 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.398 [2024-12-10 04:12:25.320265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:26.398 [2024-12-10 04:12:25.320343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc0310 with addr=10.0.0.2, port=4420 00:25:26.398 [2024-12-10 04:12:25.320375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0310 is same with the state(6) to be set 00:25:26.398 [2024-12-10 04:12:25.320427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc0310 (9): Bad file descriptor 00:25:26.398 [2024-12-10 04:12:25.321381] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:26.398 [2024-12-10 04:12:25.321446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:26.398 [2024-12-10 04:12:25.321469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:26.398 [2024-12-10 04:12:25.321503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:26.398 [2024-12-10 04:12:25.321524] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:26.399 [2024-12-10 04:12:25.321540] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:26.399 [2024-12-10 04:12:25.321553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:26.399 [2024-12-10 04:12:25.321574] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:26.399 [2024-12-10 04:12:25.321589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:26.399 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.399 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.399 04:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.334 [2024-12-10 04:12:26.324103] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:27.334 [2024-12-10 04:12:26.324122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:27.334 [2024-12-10 04:12:26.324132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:27.334 [2024-12-10 04:12:26.324138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:27.334 [2024-12-10 04:12:26.324144] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:27.334 [2024-12-10 04:12:26.324150] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:27.334 [2024-12-10 04:12:26.324154] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:27.334 [2024-12-10 04:12:26.324158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:27.334 [2024-12-10 04:12:26.324184] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:27.334 [2024-12-10 04:12:26.324204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.334 [2024-12-10 04:12:26.324212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.334 [2024-12-10 04:12:26.324221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.334 [2024-12-10 04:12:26.324228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.334 [2024-12-10 04:12:26.324235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.334 [2024-12-10 04:12:26.324241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.334 [2024-12-10 04:12:26.324247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.334 [2024-12-10 04:12:26.324254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.334 [2024-12-10 04:12:26.324261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.334 [2024-12-10 04:12:26.324267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.334 [2024-12-10 04:12:26.324273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:27.334 [2024-12-10 04:12:26.324569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bafa60 (9): Bad file descriptor 00:25:27.335 [2024-12-10 04:12:26.325580] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:27.335 [2024-12-10 04:12:26.325590] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:27.335 04:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.270 04:12:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.270 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.528 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:28.528 04:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.095 [2024-12-10 04:12:28.375650] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:29.095 [2024-12-10 04:12:28.375666] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:29.095 [2024-12-10 04:12:28.375677] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:29.354 [2024-12-10 04:12:28.502053] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:29.354 [2024-12-10 04:12:28.556612] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:29.354 [2024-12-10 04:12:28.557216] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bc2650:1 started. 
00:25:29.354 [2024-12-10 04:12:28.558205] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:29.354 [2024-12-10 04:12:28.558234] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:29.354 [2024-12-10 04:12:28.558250] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:29.354 [2024-12-10 04:12:28.558262] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:29.354 [2024-12-10 04:12:28.558268] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.354 [2024-12-10 04:12:28.605748] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bc2650 was disconnected and freed. delete nvme_qpair. 
00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 176122 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 176122 ']' 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 176122 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.354 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176122 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176122' 00:25:29.613 killing process with pid 176122 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 176122 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 176122 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.613 rmmod nvme_tcp 00:25:29.613 rmmod nvme_fabrics 00:25:29.613 rmmod nvme_keyring 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 176099 ']' 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 176099 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 176099 ']' 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 176099 00:25:29.613 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176099 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176099' 00:25:29.872 killing process with pid 176099 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 176099 00:25:29.872 04:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 176099 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.872 04:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.407 00:25:32.407 real 0m20.404s 00:25:32.407 user 0m24.412s 00:25:32.407 sys 0m5.848s 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.407 ************************************ 00:25:32.407 END TEST nvmf_discovery_remove_ifc 00:25:32.407 ************************************ 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.407 ************************************ 00:25:32.407 START TEST nvmf_identify_kernel_target 00:25:32.407 ************************************ 00:25:32.407 04:12:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:32.407 * Looking for test storage... 00:25:32.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:32.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.407 --rc genhtml_branch_coverage=1 00:25:32.407 --rc genhtml_function_coverage=1 00:25:32.407 --rc genhtml_legend=1 00:25:32.407 --rc geninfo_all_blocks=1 00:25:32.407 --rc geninfo_unexecuted_blocks=1 00:25:32.407 00:25:32.407 ' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:32.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.407 --rc genhtml_branch_coverage=1 00:25:32.407 --rc genhtml_function_coverage=1 00:25:32.407 --rc genhtml_legend=1 00:25:32.407 --rc geninfo_all_blocks=1 00:25:32.407 --rc geninfo_unexecuted_blocks=1 00:25:32.407 00:25:32.407 ' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:32.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.407 --rc genhtml_branch_coverage=1 00:25:32.407 --rc genhtml_function_coverage=1 00:25:32.407 --rc genhtml_legend=1 00:25:32.407 --rc geninfo_all_blocks=1 00:25:32.407 --rc geninfo_unexecuted_blocks=1 00:25:32.407 00:25:32.407 ' 00:25:32.407 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:32.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.407 --rc genhtml_branch_coverage=1 00:25:32.407 --rc genhtml_function_coverage=1 00:25:32.407 --rc genhtml_legend=1 00:25:32.407 --rc geninfo_all_blocks=1 00:25:32.407 --rc geninfo_unexecuted_blocks=1 00:25:32.408 00:25:32.408 ' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:32.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.408 04:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:38.978 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.978 04:12:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:38.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:38.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:38.979 Found net devices under 0000:af:00.0: cvl_0_0 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:38.979 Found net devices under 0000:af:00.1: cvl_0_1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:38.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:38.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms
00:25:38.979
00:25:38.979 --- 10.0.0.2 ping statistics ---
00:25:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:38.979 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:38.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:38.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:25:38.979
00:25:38.979 --- 10.0.0.1 ping statistics ---
00:25:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:38.979 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:38.979 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:38.980 04:12:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:40.885 Waiting for block devices as requested
00:25:40.885 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:41.144 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:41.144 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:41.144 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:41.403 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:41.403 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:41.403 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:41.662 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:41.662 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:41.662 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:41.662 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:41.920 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:41.920 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:41.920 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:42.179 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:42.179 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:42.179 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
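For reference, the configure_kernel_target trace that follows drives the kernel NVMe-oF target entirely through configfs. The xtrace records each echo's argument but elides its redirection target, so the sketch below maps the traced values onto the standard Linux nvmet configfs attribute names; those attribute names are inferred from the kernel's documented nvmet layout rather than from this log, and the device, NQN, and address values are simply the ones this run used:

# minimal standalone sketch of the kernel target setup traced below
# (attribute names inferred; values taken from this run)
modprobe nvmet
modprobe nvmet_tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
# the model string resurfaces later in the identify output as
# "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn"
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
# publishing the subsystem on the port is the last step; the kernel
# starts listening on 10.0.0.1:4420 once this link exists
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The inverse teardown, traced near the end of this test as clean_kernel_target, amounts to the following (relative to /sys/kernel/config/nvmet, with the same caveat that the echo's target is inferred):

# mirror of the clean_kernel_target sequence traced later in this log
echo 0 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir ports/1
rmdir subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet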
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:42.438
00:25:42.438 Discovery Log Number of Records 2, Generation counter 2
00:25:42.438 =====Discovery Log Entry 0======
00:25:42.438 trtype: tcp
00:25:42.438 adrfam: ipv4
00:25:42.438 subtype: current discovery subsystem
00:25:42.438 treq: not specified, sq flow control disable supported
00:25:42.438 portid: 1
00:25:42.438 trsvcid: 4420
00:25:42.438 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:42.438 traddr: 10.0.0.1
00:25:42.438 eflags: none
00:25:42.438 sectype: none
00:25:42.438 =====Discovery Log Entry 1======
00:25:42.438 trtype: tcp
00:25:42.438 adrfam: ipv4
00:25:42.438 subtype: nvme subsystem
00:25:42.438 treq: not specified, sq flow control disable
supported 00:25:42.438 portid: 1 00:25:42.438 trsvcid: 4420 00:25:42.438 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:42.438 traddr: 10.0.0.1 00:25:42.438 eflags: none 00:25:42.438 sectype: none 00:25:42.438 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:42.438 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:42.698 ===================================================== 00:25:42.698 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:42.698 ===================================================== 00:25:42.698 Controller Capabilities/Features 00:25:42.698 ================================ 00:25:42.698 Vendor ID: 0000 00:25:42.698 Subsystem Vendor ID: 0000 00:25:42.698 Serial Number: aa2652f85f08dac659e9 00:25:42.698 Model Number: Linux 00:25:42.698 Firmware Version: 6.8.9-20 00:25:42.698 Recommended Arb Burst: 0 00:25:42.698 IEEE OUI Identifier: 00 00 00 00:25:42.698 Multi-path I/O 00:25:42.698 May have multiple subsystem ports: No 00:25:42.698 May have multiple controllers: No 00:25:42.698 Associated with SR-IOV VF: No 00:25:42.698 Max Data Transfer Size: Unlimited 00:25:42.698 Max Number of Namespaces: 0 00:25:42.698 Max Number of I/O Queues: 1024 00:25:42.698 NVMe Specification Version (VS): 1.3 00:25:42.698 NVMe Specification Version (Identify): 1.3 00:25:42.698 Maximum Queue Entries: 1024 00:25:42.698 Contiguous Queues Required: No 00:25:42.698 Arbitration Mechanisms Supported 00:25:42.698 Weighted Round Robin: Not Supported 00:25:42.698 Vendor Specific: Not Supported 00:25:42.698 Reset Timeout: 7500 ms 00:25:42.698 Doorbell Stride: 4 bytes 00:25:42.698 NVM Subsystem Reset: Not Supported 00:25:42.698 Command Sets Supported 00:25:42.698 NVM Command Set: Supported 00:25:42.698 Boot Partition: Not Supported 00:25:42.698 Memory Page Size Minimum: 4096 bytes 00:25:42.698 Memory Page Size Maximum: 4096 bytes 00:25:42.698 Persistent Memory Region: Not Supported 00:25:42.698 Optional Asynchronous Events Supported 00:25:42.698 Namespace Attribute Notices: Not Supported 00:25:42.698 Firmware Activation Notices: Not Supported 00:25:42.698 ANA Change Notices: Not Supported 00:25:42.698 PLE Aggregate Log Change Notices: Not Supported 00:25:42.698 LBA Status Info Alert Notices: Not Supported 00:25:42.698 EGE Aggregate Log Change Notices: Not Supported 00:25:42.698 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.698 Zone Descriptor Change Notices: Not Supported 00:25:42.698 Discovery Log Change Notices: Supported 00:25:42.698 Controller Attributes 00:25:42.699 128-bit Host Identifier: Not Supported 00:25:42.699 Non-Operational Permissive Mode: Not Supported 00:25:42.699 NVM Sets: Not Supported 00:25:42.699 Read Recovery Levels: Not Supported 00:25:42.699 Endurance Groups: Not Supported 00:25:42.699 Predictable Latency Mode: Not Supported 00:25:42.699 Traffic Based Keep ALive: Not Supported 00:25:42.699 Namespace Granularity: Not Supported 00:25:42.699 SQ Associations: Not Supported 00:25:42.699 UUID List: Not Supported 00:25:42.699 Multi-Domain Subsystem: Not Supported 00:25:42.699 Fixed Capacity Management: Not Supported 00:25:42.699 Variable Capacity Management: Not Supported 00:25:42.699 Delete Endurance Group: Not Supported 00:25:42.699 Delete NVM Set: Not Supported 00:25:42.699 Extended LBA Formats Supported: Not Supported 00:25:42.699 Flexible Data Placement 
Supported: Not Supported 00:25:42.699 00:25:42.699 Controller Memory Buffer Support 00:25:42.699 ================================ 00:25:42.699 Supported: No 00:25:42.699 00:25:42.699 Persistent Memory Region Support 00:25:42.699 ================================ 00:25:42.699 Supported: No 00:25:42.699 00:25:42.699 Admin Command Set Attributes 00:25:42.699 ============================ 00:25:42.699 Security Send/Receive: Not Supported 00:25:42.699 Format NVM: Not Supported 00:25:42.699 Firmware Activate/Download: Not Supported 00:25:42.699 Namespace Management: Not Supported 00:25:42.699 Device Self-Test: Not Supported 00:25:42.699 Directives: Not Supported 00:25:42.699 NVMe-MI: Not Supported 00:25:42.699 Virtualization Management: Not Supported 00:25:42.699 Doorbell Buffer Config: Not Supported 00:25:42.699 Get LBA Status Capability: Not Supported 00:25:42.699 Command & Feature Lockdown Capability: Not Supported 00:25:42.699 Abort Command Limit: 1 00:25:42.699 Async Event Request Limit: 1 00:25:42.699 Number of Firmware Slots: N/A 00:25:42.699 Firmware Slot 1 Read-Only: N/A 00:25:42.699 Firmware Activation Without Reset: N/A 00:25:42.699 Multiple Update Detection Support: N/A 00:25:42.699 Firmware Update Granularity: No Information Provided 00:25:42.699 Per-Namespace SMART Log: No 00:25:42.699 Asymmetric Namespace Access Log Page: Not Supported 00:25:42.699 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:42.699 Command Effects Log Page: Not Supported 00:25:42.699 Get Log Page Extended Data: Supported 00:25:42.699 Telemetry Log Pages: Not Supported 00:25:42.699 Persistent Event Log Pages: Not Supported 00:25:42.699 Supported Log Pages Log Page: May Support 00:25:42.699 Commands Supported & Effects Log Page: Not Supported 00:25:42.699 Feature Identifiers & Effects Log Page:May Support 00:25:42.699 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.699 Data Area 4 for Telemetry Log: Not Supported 00:25:42.699 Error Log Page Entries Supported: 1 00:25:42.699 Keep Alive: Not Supported 00:25:42.699 00:25:42.699 NVM Command Set Attributes 00:25:42.699 ========================== 00:25:42.699 Submission Queue Entry Size 00:25:42.699 Max: 1 00:25:42.699 Min: 1 00:25:42.699 Completion Queue Entry Size 00:25:42.699 Max: 1 00:25:42.699 Min: 1 00:25:42.699 Number of Namespaces: 0 00:25:42.699 Compare Command: Not Supported 00:25:42.699 Write Uncorrectable Command: Not Supported 00:25:42.699 Dataset Management Command: Not Supported 00:25:42.699 Write Zeroes Command: Not Supported 00:25:42.699 Set Features Save Field: Not Supported 00:25:42.699 Reservations: Not Supported 00:25:42.699 Timestamp: Not Supported 00:25:42.699 Copy: Not Supported 00:25:42.699 Volatile Write Cache: Not Present 00:25:42.699 Atomic Write Unit (Normal): 1 00:25:42.699 Atomic Write Unit (PFail): 1 00:25:42.699 Atomic Compare & Write Unit: 1 00:25:42.699 Fused Compare & Write: Not Supported 00:25:42.699 Scatter-Gather List 00:25:42.699 SGL Command Set: Supported 00:25:42.699 SGL Keyed: Not Supported 00:25:42.699 SGL Bit Bucket Descriptor: Not Supported 00:25:42.699 SGL Metadata Pointer: Not Supported 00:25:42.699 Oversized SGL: Not Supported 00:25:42.699 SGL Metadata Address: Not Supported 00:25:42.699 SGL Offset: Supported 00:25:42.699 Transport SGL Data Block: Not Supported 00:25:42.699 Replay Protected Memory Block: Not Supported 00:25:42.699 00:25:42.699 Firmware Slot Information 00:25:42.699 ========================= 00:25:42.699 Active slot: 0 00:25:42.699 00:25:42.699 00:25:42.699 Error Log 00:25:42.699 
========= 00:25:42.699 00:25:42.699 Active Namespaces 00:25:42.699 ================= 00:25:42.699 Discovery Log Page 00:25:42.699 ================== 00:25:42.699 Generation Counter: 2 00:25:42.699 Number of Records: 2 00:25:42.699 Record Format: 0 00:25:42.699 00:25:42.699 Discovery Log Entry 0 00:25:42.699 ---------------------- 00:25:42.699 Transport Type: 3 (TCP) 00:25:42.699 Address Family: 1 (IPv4) 00:25:42.699 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:42.699 Entry Flags: 00:25:42.699 Duplicate Returned Information: 0 00:25:42.699 Explicit Persistent Connection Support for Discovery: 0 00:25:42.699 Transport Requirements: 00:25:42.699 Secure Channel: Not Specified 00:25:42.699 Port ID: 1 (0x0001) 00:25:42.699 Controller ID: 65535 (0xffff) 00:25:42.699 Admin Max SQ Size: 32 00:25:42.699 Transport Service Identifier: 4420 00:25:42.699 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:42.699 Transport Address: 10.0.0.1 00:25:42.699 Discovery Log Entry 1 00:25:42.699 ---------------------- 00:25:42.699 Transport Type: 3 (TCP) 00:25:42.699 Address Family: 1 (IPv4) 00:25:42.699 Subsystem Type: 2 (NVM Subsystem) 00:25:42.699 Entry Flags: 00:25:42.699 Duplicate Returned Information: 0 00:25:42.699 Explicit Persistent Connection Support for Discovery: 0 00:25:42.699 Transport Requirements: 00:25:42.699 Secure Channel: Not Specified 00:25:42.699 Port ID: 1 (0x0001) 00:25:42.699 Controller ID: 65535 (0xffff) 00:25:42.699 Admin Max SQ Size: 32 00:25:42.699 Transport Service Identifier: 4420 00:25:42.699 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:42.699 Transport Address: 10.0.0.1 00:25:42.699 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.699 get_feature(0x01) failed 00:25:42.699 get_feature(0x02) failed 00:25:42.699 get_feature(0x04) failed 00:25:42.699 ===================================================== 00:25:42.699 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:42.699 ===================================================== 00:25:42.699 Controller Capabilities/Features 00:25:42.699 ================================ 00:25:42.699 Vendor ID: 0000 00:25:42.699 Subsystem Vendor ID: 0000 00:25:42.699 Serial Number: f73f6e21a67f195e811a 00:25:42.699 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.699 Firmware Version: 6.8.9-20 00:25:42.699 Recommended Arb Burst: 6 00:25:42.699 IEEE OUI Identifier: 00 00 00 00:25:42.699 Multi-path I/O 00:25:42.699 May have multiple subsystem ports: Yes 00:25:42.699 May have multiple controllers: Yes 00:25:42.699 Associated with SR-IOV VF: No 00:25:42.699 Max Data Transfer Size: Unlimited 00:25:42.699 Max Number of Namespaces: 1024 00:25:42.699 Max Number of I/O Queues: 128 00:25:42.699 NVMe Specification Version (VS): 1.3 00:25:42.699 NVMe Specification Version (Identify): 1.3 00:25:42.699 Maximum Queue Entries: 1024 00:25:42.699 Contiguous Queues Required: No 00:25:42.699 Arbitration Mechanisms Supported 00:25:42.699 Weighted Round Robin: Not Supported 00:25:42.699 Vendor Specific: Not Supported 00:25:42.699 Reset Timeout: 7500 ms 00:25:42.699 Doorbell Stride: 4 bytes 00:25:42.699 NVM Subsystem Reset: Not Supported 00:25:42.699 Command Sets Supported 00:25:42.699 NVM Command Set: Supported 00:25:42.699 Boot Partition: Not Supported 00:25:42.699 
Memory Page Size Minimum: 4096 bytes 00:25:42.699 Memory Page Size Maximum: 4096 bytes 00:25:42.699 Persistent Memory Region: Not Supported 00:25:42.699 Optional Asynchronous Events Supported 00:25:42.699 Namespace Attribute Notices: Supported 00:25:42.699 Firmware Activation Notices: Not Supported 00:25:42.699 ANA Change Notices: Supported 00:25:42.699 PLE Aggregate Log Change Notices: Not Supported 00:25:42.699 LBA Status Info Alert Notices: Not Supported 00:25:42.699 EGE Aggregate Log Change Notices: Not Supported 00:25:42.699 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.699 Zone Descriptor Change Notices: Not Supported 00:25:42.699 Discovery Log Change Notices: Not Supported 00:25:42.699 Controller Attributes 00:25:42.699 128-bit Host Identifier: Supported 00:25:42.699 Non-Operational Permissive Mode: Not Supported 00:25:42.699 NVM Sets: Not Supported 00:25:42.700 Read Recovery Levels: Not Supported 00:25:42.700 Endurance Groups: Not Supported 00:25:42.700 Predictable Latency Mode: Not Supported 00:25:42.700 Traffic Based Keep ALive: Supported 00:25:42.700 Namespace Granularity: Not Supported 00:25:42.700 SQ Associations: Not Supported 00:25:42.700 UUID List: Not Supported 00:25:42.700 Multi-Domain Subsystem: Not Supported 00:25:42.700 Fixed Capacity Management: Not Supported 00:25:42.700 Variable Capacity Management: Not Supported 00:25:42.700 Delete Endurance Group: Not Supported 00:25:42.700 Delete NVM Set: Not Supported 00:25:42.700 Extended LBA Formats Supported: Not Supported 00:25:42.700 Flexible Data Placement Supported: Not Supported 00:25:42.700 00:25:42.700 Controller Memory Buffer Support 00:25:42.700 ================================ 00:25:42.700 Supported: No 00:25:42.700 00:25:42.700 Persistent Memory Region Support 00:25:42.700 ================================ 00:25:42.700 Supported: No 00:25:42.700 00:25:42.700 Admin Command Set Attributes 00:25:42.700 ============================ 00:25:42.700 Security Send/Receive: Not Supported 00:25:42.700 Format NVM: Not Supported 00:25:42.700 Firmware Activate/Download: Not Supported 00:25:42.700 Namespace Management: Not Supported 00:25:42.700 Device Self-Test: Not Supported 00:25:42.700 Directives: Not Supported 00:25:42.700 NVMe-MI: Not Supported 00:25:42.700 Virtualization Management: Not Supported 00:25:42.700 Doorbell Buffer Config: Not Supported 00:25:42.700 Get LBA Status Capability: Not Supported 00:25:42.700 Command & Feature Lockdown Capability: Not Supported 00:25:42.700 Abort Command Limit: 4 00:25:42.700 Async Event Request Limit: 4 00:25:42.700 Number of Firmware Slots: N/A 00:25:42.700 Firmware Slot 1 Read-Only: N/A 00:25:42.700 Firmware Activation Without Reset: N/A 00:25:42.700 Multiple Update Detection Support: N/A 00:25:42.700 Firmware Update Granularity: No Information Provided 00:25:42.700 Per-Namespace SMART Log: Yes 00:25:42.700 Asymmetric Namespace Access Log Page: Supported 00:25:42.700 ANA Transition Time : 10 sec 00:25:42.700 00:25:42.700 Asymmetric Namespace Access Capabilities 00:25:42.700 ANA Optimized State : Supported 00:25:42.700 ANA Non-Optimized State : Supported 00:25:42.700 ANA Inaccessible State : Supported 00:25:42.700 ANA Persistent Loss State : Supported 00:25:42.700 ANA Change State : Supported 00:25:42.700 ANAGRPID is not changed : No 00:25:42.700 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:42.700 00:25:42.700 ANA Group Identifier Maximum : 128 00:25:42.700 Number of ANA Group Identifiers : 128 00:25:42.700 Max Number of Allowed Namespaces : 1024 00:25:42.700 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:42.700 Command Effects Log Page: Supported 00:25:42.700 Get Log Page Extended Data: Supported 00:25:42.700 Telemetry Log Pages: Not Supported 00:25:42.700 Persistent Event Log Pages: Not Supported 00:25:42.700 Supported Log Pages Log Page: May Support 00:25:42.700 Commands Supported & Effects Log Page: Not Supported 00:25:42.700 Feature Identifiers & Effects Log Page:May Support 00:25:42.700 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.700 Data Area 4 for Telemetry Log: Not Supported 00:25:42.700 Error Log Page Entries Supported: 128 00:25:42.700 Keep Alive: Supported 00:25:42.700 Keep Alive Granularity: 1000 ms 00:25:42.700 00:25:42.700 NVM Command Set Attributes 00:25:42.700 ========================== 00:25:42.700 Submission Queue Entry Size 00:25:42.700 Max: 64 00:25:42.700 Min: 64 00:25:42.700 Completion Queue Entry Size 00:25:42.700 Max: 16 00:25:42.700 Min: 16 00:25:42.700 Number of Namespaces: 1024 00:25:42.700 Compare Command: Not Supported 00:25:42.700 Write Uncorrectable Command: Not Supported 00:25:42.700 Dataset Management Command: Supported 00:25:42.700 Write Zeroes Command: Supported 00:25:42.700 Set Features Save Field: Not Supported 00:25:42.700 Reservations: Not Supported 00:25:42.700 Timestamp: Not Supported 00:25:42.700 Copy: Not Supported 00:25:42.700 Volatile Write Cache: Present 00:25:42.700 Atomic Write Unit (Normal): 1 00:25:42.700 Atomic Write Unit (PFail): 1 00:25:42.700 Atomic Compare & Write Unit: 1 00:25:42.700 Fused Compare & Write: Not Supported 00:25:42.700 Scatter-Gather List 00:25:42.700 SGL Command Set: Supported 00:25:42.700 SGL Keyed: Not Supported 00:25:42.700 SGL Bit Bucket Descriptor: Not Supported 00:25:42.700 SGL Metadata Pointer: Not Supported 00:25:42.700 Oversized SGL: Not Supported 00:25:42.700 SGL Metadata Address: Not Supported 00:25:42.700 SGL Offset: Supported 00:25:42.700 Transport SGL Data Block: Not Supported 00:25:42.700 Replay Protected Memory Block: Not Supported 00:25:42.700 00:25:42.700 Firmware Slot Information 00:25:42.700 ========================= 00:25:42.700 Active slot: 0 00:25:42.700 00:25:42.700 Asymmetric Namespace Access 00:25:42.700 =========================== 00:25:42.700 Change Count : 0 00:25:42.700 Number of ANA Group Descriptors : 1 00:25:42.700 ANA Group Descriptor : 0 00:25:42.700 ANA Group ID : 1 00:25:42.700 Number of NSID Values : 1 00:25:42.700 Change Count : 0 00:25:42.700 ANA State : 1 00:25:42.700 Namespace Identifier : 1 00:25:42.700 00:25:42.700 Commands Supported and Effects 00:25:42.700 ============================== 00:25:42.700 Admin Commands 00:25:42.700 -------------- 00:25:42.700 Get Log Page (02h): Supported 00:25:42.700 Identify (06h): Supported 00:25:42.700 Abort (08h): Supported 00:25:42.700 Set Features (09h): Supported 00:25:42.700 Get Features (0Ah): Supported 00:25:42.700 Asynchronous Event Request (0Ch): Supported 00:25:42.700 Keep Alive (18h): Supported 00:25:42.700 I/O Commands 00:25:42.700 ------------ 00:25:42.700 Flush (00h): Supported 00:25:42.700 Write (01h): Supported LBA-Change 00:25:42.700 Read (02h): Supported 00:25:42.700 Write Zeroes (08h): Supported LBA-Change 00:25:42.700 Dataset Management (09h): Supported 00:25:42.700 00:25:42.700 Error Log 00:25:42.700 ========= 00:25:42.700 Entry: 0 00:25:42.700 Error Count: 0x3 00:25:42.700 Submission Queue Id: 0x0 00:25:42.700 Command Id: 0x5 00:25:42.700 Phase Bit: 0 00:25:42.700 Status Code: 0x2 00:25:42.700 Status Code Type: 0x0 00:25:42.700 Do Not Retry: 1 00:25:42.700 
Error Location: 0x28 00:25:42.700 LBA: 0x0 00:25:42.700 Namespace: 0x0 00:25:42.700 Vendor Log Page: 0x0 00:25:42.700 ----------- 00:25:42.700 Entry: 1 00:25:42.700 Error Count: 0x2 00:25:42.700 Submission Queue Id: 0x0 00:25:42.700 Command Id: 0x5 00:25:42.700 Phase Bit: 0 00:25:42.700 Status Code: 0x2 00:25:42.700 Status Code Type: 0x0 00:25:42.700 Do Not Retry: 1 00:25:42.700 Error Location: 0x28 00:25:42.700 LBA: 0x0 00:25:42.700 Namespace: 0x0 00:25:42.700 Vendor Log Page: 0x0 00:25:42.700 ----------- 00:25:42.700 Entry: 2 00:25:42.700 Error Count: 0x1 00:25:42.700 Submission Queue Id: 0x0 00:25:42.700 Command Id: 0x4 00:25:42.700 Phase Bit: 0 00:25:42.700 Status Code: 0x2 00:25:42.700 Status Code Type: 0x0 00:25:42.700 Do Not Retry: 1 00:25:42.700 Error Location: 0x28 00:25:42.700 LBA: 0x0 00:25:42.700 Namespace: 0x0 00:25:42.700 Vendor Log Page: 0x0 00:25:42.700 00:25:42.700 Number of Queues 00:25:42.700 ================ 00:25:42.700 Number of I/O Submission Queues: 128 00:25:42.700 Number of I/O Completion Queues: 128 00:25:42.700 00:25:42.700 ZNS Specific Controller Data 00:25:42.700 ============================ 00:25:42.700 Zone Append Size Limit: 0 00:25:42.700 00:25:42.700 00:25:42.700 Active Namespaces 00:25:42.700 ================= 00:25:42.700 get_feature(0x05) failed 00:25:42.700 Namespace ID:1 00:25:42.700 Command Set Identifier: NVM (00h) 00:25:42.700 Deallocate: Supported 00:25:42.700 Deallocated/Unwritten Error: Not Supported 00:25:42.700 Deallocated Read Value: Unknown 00:25:42.700 Deallocate in Write Zeroes: Not Supported 00:25:42.700 Deallocated Guard Field: 0xFFFF 00:25:42.700 Flush: Supported 00:25:42.700 Reservation: Not Supported 00:25:42.700 Namespace Sharing Capabilities: Multiple Controllers 00:25:42.700 Size (in LBAs): 1953525168 (931GiB) 00:25:42.700 Capacity (in LBAs): 1953525168 (931GiB) 00:25:42.700 Utilization (in LBAs): 1953525168 (931GiB) 00:25:42.700 UUID: 0e8c6723-201a-4c1b-8915-7268ed0e34e7 00:25:42.700 Thin Provisioning: Not Supported 00:25:42.700 Per-NS Atomic Units: Yes 00:25:42.700 Atomic Boundary Size (Normal): 0 00:25:42.700 Atomic Boundary Size (PFail): 0 00:25:42.700 Atomic Boundary Offset: 0 00:25:42.700 NGUID/EUI64 Never Reused: No 00:25:42.700 ANA group ID: 1 00:25:42.700 Namespace Write Protected: No 00:25:42.700 Number of LBA Formats: 1 00:25:42.700 Current LBA Format: LBA Format #00 00:25:42.700 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:42.700 00:25:42.700 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.701 rmmod nvme_tcp 00:25:42.701 rmmod nvme_fabrics 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:42.701 04:12:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.701 04:12:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:45.235 04:12:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.235 04:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.235 04:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.235 04:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:45.235 04:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.770 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.770 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.707 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.707 00:25:48.707 real 0m16.673s 00:25:48.707 user 0m4.353s 00:25:48.707 sys 0m8.671s 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.707 ************************************ 00:25:48.707 END TEST nvmf_identify_kernel_target 00:25:48.707 ************************************ 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.707 04:12:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.966 ************************************ 00:25:48.966 START TEST nvmf_auth_host 00:25:48.966 ************************************ 00:25:48.966 04:12:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.966 * Looking for test storage... 
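The `ioatdma -> vfio-pci` rebinds just before the END TEST banner are setup.sh detaching the DMA engines (and, last, the NVMe drive) from their kernel drivers so SPDK can drive them from user space. A minimal sketch of one such rebind through sysfs, assuming a hypothetical BDF; the real setup.sh additionally manages hugepages, permissions, and device allow-lists:

    # Rebind one device (hypothetical BDF) from its kernel driver to vfio-pci.
    bdf=0000:00:04.7
    dev=/sys/bus/pci/devices/$bdf
    modprobe vfio-pci
    # Detach from the current driver (ioatdma here), if one is bound at all.
    [ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"
    # Tell the PCI core only vfio-pci may claim this device, then bind it.
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/bind
    # Clear the override so a later reset can hand the device back.
    echo > "$dev/driver_override"

The `vfio-pci -> nvme` line further down (under "Waiting for block devices as requested") is the same mechanism run in reverse by setup.sh reset, which is what lets the kernel rediscover /dev/nvme0n1 for the kernel-target portion of this test.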
00:25:48.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.966 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:48.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.967 --rc genhtml_branch_coverage=1 00:25:48.967 --rc genhtml_function_coverage=1 00:25:48.967 --rc genhtml_legend=1 00:25:48.967 --rc geninfo_all_blocks=1 00:25:48.967 --rc geninfo_unexecuted_blocks=1 00:25:48.967 00:25:48.967 ' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:48.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.967 --rc genhtml_branch_coverage=1 00:25:48.967 --rc genhtml_function_coverage=1 00:25:48.967 --rc genhtml_legend=1 00:25:48.967 --rc geninfo_all_blocks=1 00:25:48.967 --rc geninfo_unexecuted_blocks=1 00:25:48.967 00:25:48.967 ' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:48.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.967 --rc genhtml_branch_coverage=1 00:25:48.967 --rc genhtml_function_coverage=1 00:25:48.967 --rc genhtml_legend=1 00:25:48.967 --rc geninfo_all_blocks=1 00:25:48.967 --rc geninfo_unexecuted_blocks=1 00:25:48.967 00:25:48.967 ' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:48.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.967 --rc genhtml_branch_coverage=1 00:25:48.967 --rc genhtml_function_coverage=1 00:25:48.967 --rc genhtml_legend=1 00:25:48.967 --rc geninfo_all_blocks=1 00:25:48.967 --rc geninfo_unexecuted_blocks=1 00:25:48.967 00:25:48.967 ' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.967 04:12:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.967 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.968 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.968 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.968 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.968 04:12:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.539 04:12:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.539 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:55.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:55.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.540 
04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:55.540 Found net devices under 0000:af:00.0: cvl_0_0 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:55.540 Found net devices under 0000:af:00.1: cvl_0_1 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.540 04:12:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.540 04:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:25:55.540 00:25:55.540 --- 10.0.0.2 ping statistics --- 00:25:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.540 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:25:55.540 00:25:55.540 --- 10.0.0.1 ping statistics --- 00:25:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.540 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=187892 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 187892 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 187892 ']' 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
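nvmftestinit/nvmf_tcp_init above carved the two physical E810 ports into a two-endpoint topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule admitting NVMe/TCP on port 4420; every target-side command from here on is prefixed with `ip netns exec`. Condensed from the trace (interface names as in this run):

    # Target side lives in its own network namespace; initiator stays in root.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # The target application is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the sanity check that both directions route before the target is started.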
00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.540 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=47d5d123da6ef90810a7319fd53212aa 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zDt 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 47d5d123da6ef90810a7319fd53212aa 0 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 47d5d123da6ef90810a7319fd53212aa 0 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=47d5d123da6ef90810a7319fd53212aa 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zDt 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zDt 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zDt 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d2582995ca7a2633f8d1ae36281201b6daab491f930778a2becb9bf7b85e5b8 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oLX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d2582995ca7a2633f8d1ae36281201b6daab491f930778a2becb9bf7b85e5b8 3 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d2582995ca7a2633f8d1ae36281201b6daab491f930778a2becb9bf7b85e5b8 3 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d2582995ca7a2633f8d1ae36281201b6daab491f930778a2becb9bf7b85e5b8 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oLX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oLX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oLX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ab89a5389d5a4a25a4f747d93f6ffccd0c427fd24aba9c8a 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.t6i 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ab89a5389d5a4a25a4f747d93f6ffccd0c427fd24aba9c8a 0 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ab89a5389d5a4a25a4f747d93f6ffccd0c427fd24aba9c8a 0 
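Each gen_dhchap_key call above draws len/2 random bytes with xxd and hands the hex to a python one-liner that wraps it in the NVMe in-band-authentication secret representation, `DHHC-1:<digest id>:<base64 payload>:` (the DHHC-1 prefix and the digest ids 0-3 are visible in the trace). A stand-alone sketch under the assumption, per NVMe TP 8006, that the payload is the raw key bytes followed by their CRC-32 in little-endian order; the function body here is illustrative, not a verbatim copy of nvmf/common.sh:

    # Sketch: generate a DHCHAP secret file like "gen_dhchap_key <digest> <len>".
    gen_dhchap_key() {
        local digest=$1 len=$2                     # e.g. "null 32", "sha512 64"
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Assumed TP 8006 framing: base64 over the key bytes plus their CRC-32.
        python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # Usage mirroring the trace: keys[0]=$(gen_dhchap_key null 32)
    #                            ckeys[0]=$(gen_dhchap_key sha512 64)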
00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ab89a5389d5a4a25a4f747d93f6ffccd0c427fd24aba9c8a 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.t6i 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.t6i 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.t6i 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=261ac052a69cfb25c64f63108656c058b69312460e5a9deb 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9d4 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 261ac052a69cfb25c64f63108656c058b69312460e5a9deb 2 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 261ac052a69cfb25c64f63108656c058b69312460e5a9deb 2 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=261ac052a69cfb25c64f63108656c058b69312460e5a9deb 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9d4 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9d4 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9d4 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.541 04:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=35cfb4f081ebfa230ec2d55ea46eeaa3 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yuY 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 35cfb4f081ebfa230ec2d55ea46eeaa3 1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 35cfb4f081ebfa230ec2d55ea46eeaa3 1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=35cfb4f081ebfa230ec2d55ea46eeaa3 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yuY 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yuY 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yuY 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ea207926bba6aef6bdc8b026726b1016 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Gxu 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ea207926bba6aef6bdc8b026726b1016 1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ea207926bba6aef6bdc8b026726b1016 1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ea207926bba6aef6bdc8b026726b1016 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Gxu 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Gxu 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Gxu 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.541 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a055dfdfb980c06ce1aa613e3d0114e7a67c74510f3437b7 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nLk 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a055dfdfb980c06ce1aa613e3d0114e7a67c74510f3437b7 2 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a055dfdfb980c06ce1aa613e3d0114e7a67c74510f3437b7 2 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a055dfdfb980c06ce1aa613e3d0114e7a67c74510f3437b7 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:55.542 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nLk 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nLk 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.nLk 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:55.801 04:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e33c0c4804c521f7d46548214144596 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.A35 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e33c0c4804c521f7d46548214144596 0 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e33c0c4804c521f7d46548214144596 0 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e33c0c4804c521f7d46548214144596 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.A35 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.A35 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.A35 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=074645f51562373086f7ce539c800903f05f73ef55ea5d98c60126003f614dc8 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mNe 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 074645f51562373086f7ce539c800903f05f73ef55ea5d98c60126003f614dc8 3 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 074645f51562373086f7ce539c800903f05f73ef55ea5d98c60126003f614dc8 3 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=074645f51562373086f7ce539c800903f05f73ef55ea5d98c60126003f614dc8 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mNe 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mNe 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mNe 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 187892 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 187892 ']' 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.801 04:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zDt 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oLX ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oLX 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.t6i 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9d4 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9d4 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yuY 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Gxu ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gxu 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.nLk 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.A35 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A35 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mNe 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.060 04:12:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.060 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:56.061 04:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:58.592 Waiting for block devices as requested 00:25:58.851 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:58.851 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:58.851 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:59.109 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:59.109 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:59.109 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:59.109 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.368 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.368 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:59.368 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:59.368 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:59.627 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:59.627 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:59.627 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:59.886 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.886 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.886 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:00.453 No valid GPT data, bailing 00:26:00.453 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:00.712 04:12:59 
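The configure_kernel_target steps follow the standard Linux nvmet configfs recipe: the directories were just created above, and the attribute writes continue below. A sketch of the whole recipe, assuming the mainline nvmet attribute names (the trace shows only the echoed values, not which file each lands in, so the exact paths here are inferred):

    # Stand up a kernel NVMe-oF/TCP target backed by /dev/nvme0n1.
    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    mkdir ports/1
    echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr    # NVMF_INITIATOR_IP in this run
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
          ports/1/subsystems/

Once the port-to-subsystem symlink lands, the `nvme discover` run below should report two records, the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0, which is exactly what the log shows.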
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:00.712 00:26:00.712 Discovery Log Number of Records 2, Generation counter 2 00:26:00.712 =====Discovery Log Entry 0====== 00:26:00.712 trtype: tcp 00:26:00.712 adrfam: ipv4 00:26:00.712 subtype: current discovery subsystem 00:26:00.712 treq: not specified, sq flow control disable supported 00:26:00.712 portid: 1 00:26:00.712 trsvcid: 4420 00:26:00.712 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:00.712 traddr: 10.0.0.1 00:26:00.712 eflags: none 00:26:00.712 sectype: none 00:26:00.712 =====Discovery Log Entry 1====== 00:26:00.712 trtype: tcp 00:26:00.712 adrfam: ipv4 00:26:00.712 subtype: nvme subsystem 00:26:00.712 treq: not specified, sq flow control disable supported 00:26:00.712 portid: 1 00:26:00.712 trsvcid: 4420 00:26:00.712 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:00.712 traddr: 10.0.0.1 00:26:00.712 eflags: none 00:26:00.712 sectype: none 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:00.712 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.713 04:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.972 nvme0n1 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
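With discovery working, the run turns to DH-HMAC-CHAP proper, in two halves: the kernel target gets per-host CHAP material via configfs, and the SPDK host attaches with the matching keyring entries. The host-side RPCs below are verbatim from the trace; the target-side attribute names (dhchap_hash and friends) are the mainline nvmet ones and are inferred, since the trace shows only the echoed values:

    # Target side: restrict the subsystem to one host and give that host
    # its CHAP parameters ($key/$ckey hold the DHHC-1:... secrets).
    cd /sys/kernel/config/nvmet
    mkdir hosts/nqn.2024-02.io.spdk:host0
    echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host  # inferred
    ln -s "$PWD/hosts/nqn.2024-02.io.spdk:host0" \
          subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
    echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash
    echo ffdhe2048      > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
    echo "$key"  > hosts/nqn.2024-02.io.spdk:host0/dhchap_key       # host secret
    echo "$ckey" > hosts/nqn.2024-02.io.spdk:host0/dhchap_ctrl_key  # controller secret

    # Host side: allow every digest/DH group, then attach with
    # bidirectional authentication (host key plus controller key).
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1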
00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.972 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 nvme0n1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.231 04:13:00 
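Every combination ends with the same three-step check that just ran: list controllers, insist the only name is nvme0, then tear it down so the next digest/dhgroup/key trio starts clean. Sketched with the stock RPC client, using the trace's own commands:

    # Success criterion per iteration: the authenticated attach produced
    # exactly the controller we asked for, and detaching leaves none behind.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The nvme0n1 lines scattered through the log are the namespace of that controller surfacing and disappearing as each attach/detach cycle runs.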
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 nvme0n1 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.231 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.490 nvme0n1 00:26:01.490 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.491 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.749 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.750 nvme0n1 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.750 04:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.750 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.008 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.008 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.008 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.009 nvme0n1 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.009 04:13:01 
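Note what key 4 exercises: it was generated without a companion controller key (ckeys[4] is empty, hence the `[[ -z '' ]]` checks), so the attach that just ran passed only --dhchap-key and omitted --dhchap-ctrlr-key. That makes the authentication unidirectional: the target verifies the host, but the host never challenges the target. Verbatim from the trace:

    # One-way DH-HMAC-CHAP: no controller key, so no --dhchap-ctrlr-key.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4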
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.009 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.267 nvme0n1 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.267 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.268 
04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.268 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.527 nvme0n1 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.527 04:13:01 
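From here on the log is the same attach/verify/detach cycle stamped out across the whole parameter sweep; only the digest, DH group, and key index change between blocks. The shape of the loop, reconstructed from the `for digest/dhgroup/keyid` trace lines and the printf lists earlier in the run (nvmet_auth_set_key and connect_authenticate are the test's own helpers):

    # The sweep host/auth.sh is executing: 3 digests x 5 DH groups x 5 keys.
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done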
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.527 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.786 nvme0n1 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.786 04:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.786 04:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.786 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.045 nvme0n1 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.045 04:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.045 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.304 nvme0n1 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.304 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 nvme0n1 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:03.563 04:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.563 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.564 04:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.822 nvme0n1 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.822 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.080 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.080 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.080 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:04.080 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
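
The trace cycles through the same host-side sequence for every digest/dhgroup/keyid combination (here sha256 / ffdhe4096 / keyid=2). Reconstructed from the commands visible in the xtrace, one iteration looks roughly like the sketch below; rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays are the test harness's own names as they appear in the trace, while the function framing is a condensed sketch, not the verbatim host/auth.sh source.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pass --dhchap-ctrlr-key only when a controller key exists for this
    # keyid (keyid 4 in this run has none), mirroring the ckey=() expansion
    # shown in the trace
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Restrict the initiator to a single digest and DH group per iteration
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # get_main_ns_ip resolves NVMF_INITIATOR_IP (10.0.0.1) for tcp and
    # NVMF_FIRST_TARGET_IP for rdma, per the ip_candidates logic in the trace
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # DH-HMAC-CHAP succeeded only if the controller actually materialized
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The enclosing loops are likewise visible in the trace (for dhgroup in "${dhgroups[@]}"; for keyid in "${!keys[@]}"), which is why the same block repeats for ffdhe3072, ffdhe4096, ffdhe6144 and then ffdhe8192 across all five key IDs.
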
00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.081 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.340 nvme0n1 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.340 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.599 nvme0n1 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.599 04:13:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.599 04:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.858 nvme0n1 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.858 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.859 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.426 nvme0n1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 
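
On the target side, the four echoes nvmet_auth_set_key just issued ('hmac(sha256)', the dhgroup, the DHHC-1 key, and optionally the controller key) provision the kernel nvmet allowed-host entry. The xtrace output does not show the redirection targets, so the configfs paths in this sketch are an assumption based on the kernel nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); only the echoed values and the keys/ckeys array names are taken from the trace.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Assumed configfs location of the allowed-host entry created by the test
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup" > "$host/dhchap_dhgroup"       # e.g. ffdhe6144
    echo "${keys[keyid]}" > "$host/dhchap_key"     # DHHC-1:xx:...: host key
    # Bidirectional auth is optional; skip when no controller key is defined
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}
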
00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.426 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.685 nvme0n1 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.685 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.997 04:13:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.997 04:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.997 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.281 nvme0n1 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.281 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.282 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.854 nvme0n1 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.854 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.855 04:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.114 nvme0n1 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.114 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.115 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.374 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.374 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.374 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.941 nvme0n1 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.941 04:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:07.941 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.942 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.510 nvme0n1 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:08.510 
04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.510 04:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.078 nvme0n1 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.078 
04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.078 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.079 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.646 nvme0n1 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.646 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.905 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.906 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.906 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.906 04:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.474 nvme0n1 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.474 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.734 nvme0n1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.734 04:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.734 nvme0n1 00:26:10.734 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:10.994 04:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.994 nvme0n1 00:26:10.994 04:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.994 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.253 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.254 nvme0n1 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.254 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.513 nvme0n1 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:11.513 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.514 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.773 nvme0n1 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.773 04:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.773 
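Each iteration traced here follows the same connect_authenticate pattern: constrain the initiator's DH-CHAP options, attach over TCP with one numbered key, check that the controller came up, then detach. A condensed sketch of that flow, using only the commands visible in this trace and assuming the rpc_cmd wrapper plus the keys/ckeys arrays set up earlier in host/auth.sh:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pass the controller key only when one exists for this keyid (frame @58)
    local -a ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Restrict the initiator to a single digest/dhgroup pair (frame @60)
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP with the numbered DH-CHAP key; attach prints the bdev
    # name (the standalone "nvme0n1" lines in the log) on success (frame @61)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # Authentication succeeded iff the controller is listed (frames @64-@65)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}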
04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.773 04:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.773 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.031 nvme0n1 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.031 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.290 nvme0n1 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.290 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.550 nvme0n1 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.550 
04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.550 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.809 nvme0n1 00:26:12.809 04:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.809 
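get_main_ns_ip (the nvmf/common.sh@769-@783 frames traced repeatedly above) resolves the address to dial per transport. The trace expands to [[ -z NVMF_INITIATOR_IP ]] before the value check [[ -z 10.0.0.1 ]], which suggests it maps transport to a variable *name* and then dereferences it via indirect expansion. A sketch under that assumption, with TEST_TRANSPORT and NVMF_INITIATOR_IP assumed to come from the test environment:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # traced: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                             # traced: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                           # 10.0.0.1 throughout this run
}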
04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.809 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.068 nvme0n1 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.068 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.327 04:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.327 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.586 nvme0n1 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:13.586 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.587 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.846 nvme0n1 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.846 04:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.846 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.105 nvme0n1 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.105 04:13:13 
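nvmet_auth_set_key (frames @42-@51 above) mirrors each key onto the kernel nvmet target so both ends agree before connect_authenticate runs. xtrace hides redirection targets, so where the bare echo lines land is an assumption here: the standard nvmet configfs host attributes, with $hostnqn_dir standing in for a /sys/kernel/config/nvmet/hosts/<hostnqn> path prepared earlier.

nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest=$1 dhgroup=$2 keyid=$3
    key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    echo "hmac($digest)" > "$hostnqn_dir/dhchap_hash"    # frame @48
    echo "$dhgroup" > "$hostnqn_dir/dhchap_dhgroup"      # frame @49
    echo "$key" > "$hostnqn_dir/dhchap_key"              # frame @50
    # keyid 4 has no controller key (ckey= in the trace), so skip the write
    [[ -z $ckey ]] || echo "$ckey" > "$hostnqn_dir/dhchap_ctrl_key"  # frame @51
}

The DHHC-1:NN:<base64>: strings echoed above appear to be nvme-cli-style DH-CHAP secrets, where NN indicates the secret transformation (00 cleartext, 01/02/03 for SHA-256/384/512) and the base64 payload carries the key bytes plus a checksum.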
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.105 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.364 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.364 nvme0n1 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.623 04:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.882 nvme0n1 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.882 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.141 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.401 nvme0n1 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.401 04:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.401 04:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.401 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.970 nvme0n1 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.970 04:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.970 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.970 
04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.229 nvme0n1 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.229 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.797 nvme0n1 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.797 04:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.797 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.798 04:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 nvme0n1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.366 04:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.933 nvme0n1 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.933 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.192 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.193 
04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.193 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.761 nvme0n1 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.761 04:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.329 nvme0n1 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.329 04:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.329 04:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.329 04:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.897 nvme0n1 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.897 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.156 nvme0n1 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.156 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.157 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.415 nvme0n1 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:20.415 
04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.416 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.674 nvme0n1 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.674 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.675 
04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.675 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.934 nvme0n1 00:26:20.934 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.934 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.934 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.934 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.934 04:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.934 nvme0n1 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.934 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.193 nvme0n1 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.193 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.452 
04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.452 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.453 04:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.453 nvme0n1 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.453 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:21.712 04:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.712 nvme0n1 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.712 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.971 04:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.971 04:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.971 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.972 nvme0n1 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.972 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.231 
04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
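The xtrace above repeats one iteration of the nvmf_auth_host connect/authenticate loop: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096, ...) and each key ID 0-4, the host is restricted to a single digest/dhgroup pair via bdev_nvme_set_options, a controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a bidirectional secret exists for that key ID), the attach is verified with bdev_nvme_get_controllers, and the controller is detached again. Below is a minimal sketch of that loop reconstructed from the trace; it is not the literal host/auth.sh, and the rpc.py path, the keyring entry names key0..key4 / ckey0..ckey3, and the ckeys-array handling are assumptions made for illustration.

#!/usr/bin/env bash
# Sketch (assumed, reconstructed from the xtrace above) of the DH-HMAC-CHAP
# connect/authenticate sweep. Assumes an SPDK target listening on
# 10.0.0.1:4420 with the per-keyid secrets already registered, and that
# rpc_cmd in the trace maps to scripts/rpc.py against the running host app.
rpc=./scripts/rpc.py                       # path is an assumption
digest=sha512                              # digest under test in this section
keys=(key0 key1 key2 key3 key4)            # keyring entries for keyid 0..4
ckeys=(ckey0 ckey1 ckey2 ckey3)            # keyid 4 has no controller key

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
        # Restrict the host to one digest/dhgroup pair per iteration.
        "$rpc" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Add the controller key only when a bidirectional secret exists
        # for this keyid (mirrors the [[ -z ... ]] guard at auth.sh@51).
        ckey_arg=()
        [[ -n ${ckeys[keyid]:-} ]] && ckey_arg=(--dhchap-ctrlr-key "${ckeys[keyid]}")

        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "${keys[keyid]}" "${ckey_arg[@]}"

        # Verify the controller authenticated and came up, then tear down.
        name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $name == nvme0 ]]
        "$rpc" bdev_nvme_detach_controller nvme0
    done
done

Note that key ID 4 carries an empty ckey in the trace ("ckey=" at auth.sh@46, followed by "[[ -z '' ]]" at auth.sh@51), which is why the sketch only appends --dhchap-ctrlr-key when ckeys[keyid] is set; the records that follow show exactly that unidirectional attach for keyid 4 before the sweep advances to the next DH group.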
00:26:22.231 nvme0n1 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.231 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:22.489 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.490 04:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.490 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.748 nvme0n1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.748 04:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.748 04:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.748 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.749 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.749 04:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.007 nvme0n1 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.007 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.265 nvme0n1 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.265 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.266 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 nvme0n1 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.782 04:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.042 nvme0n1 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
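# --- editor's note -----------------------------------------------------------
# get_main_ns_ip (nvmf/common.sh@769-783, traced repeatedly above) picks the
# address to dial by transport. Note that ip_candidates stores variable
# *names*, not values; the ${!ip} indirect expansion then resolves the name
# (here NVMF_INITIATOR_IP -> 10.0.0.1). A sketch of the same logic; the
# TEST_TRANSPORT variable name is an assumption, the trace only shows its
# expanded value "tcp":
get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT ]] && return 1    # must know the transport
    ip=${ip_candidates[$TEST_TRANSPORT]}    # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # the named variable must be set
    echo "${!ip}"                           # e.g. 10.0.0.1
}
# -----------------------------------------------------------------------------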
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.042 04:13:23 
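# --- editor's note -----------------------------------------------------------
# nvmet_auth_set_key (host/auth.sh@42-51, traced above) programs the kernel
# nvmet target side of each handshake. xtrace hides redirection targets, so
# the echoes above appear bare; presumably they land in the stock Linux nvmet
# configfs host attributes. A sketch under that assumption (the host NQN in
# the path matches the -q value used by the attach calls):
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha512)
    echo "$dhgroup" > "$host/dhchap_dhgroup"       # e.g. ffdhe6144
    echo "$key" > "$host/dhchap_key"               # host secret, DHHC-1:...
    if [[ -n $ckey ]]; then                        # ctrlr key is optional
        echo "$ckey" > "$host/dhchap_ctrl_key"
    fi
}
# -----------------------------------------------------------------------------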
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.042 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 nvme0n1 00:26:24.300 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.300 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.300 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.300 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.300 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.559 04:13:23 
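# --- editor's note -----------------------------------------------------------
# All secrets in this log use the NVMe DH-HMAC-CHAP representation
# "DHHC-1:TT:<base64>:", where TT encodes the secret's hash/length class
# (00 = unhashed; 01/02/03 correspond to the 32/48/64-byte keys seen in this
# run) and the base64 payload is the key material followed by a 4-byte
# CRC-32. A quick length check, usable on any of the keys above:
dhchap_key_len() {
    local b64=${1#DHHC-1:??:}    # drop the "DHHC-1:TT:" prefix
    b64=${b64%:}                 # and the trailing colon
    echo $(($(printf '%s' "$b64" | base64 -d | wc -c) - 4))   # minus CRC-32
}
# dhchap_key_len 'DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW:'
# prints 32, i.e. a SHA-256-sized secret
# -----------------------------------------------------------------------------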
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.559 04:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 nvme0n1 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
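# --- editor's note -----------------------------------------------------------
# The ckey=( ... ) assignment just below is how the test makes the controller
# key optional: ${var:+word} expands to word only when var is non-empty, so
# the array ends up holding either two extra arguments or none at all (keyid 4
# has no ctrlr key in this run). Standalone demonstration with hypothetical
# values:
ckeys=([1]=secret1 [4]="")   # index 4 deliberately empty
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# prints "keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1"
# then   "keyid=4 -> 0 extra args: "
# -----------------------------------------------------------------------------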
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.819 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.387 nvme0n1 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.387 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.388 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.647 nvme0n1 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.647 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.906 04:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.906 04:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.164 nvme0n1 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
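# --- editor's note -----------------------------------------------------------
# Each cycle's success check is the pair traced above: list the controllers,
# then compare the name inside [[ ]]. The right-hand side "\n\v\m\e\0" has
# every character escaped so bash treats it as a literal string rather than a
# glob pattern. The bare "nvme0n1" lines interleaved in this log appear to be
# the namespace's block device surfacing after each successful, authenticated
# attach. The check in isolation:
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == \n\v\m\e\0 ]]                    # literal comparison with "nvme0"
rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next keyid
# -----------------------------------------------------------------------------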
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDdkNWQxMjNkYTZlZjkwODEwYTczMTlmZDUzMjEyYWGlLZG1: 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyNTgyOTk1Y2E3YTI2MzNmOGQxYWUzNjI4MTIwMWI2ZGFhYjQ5MWY5MzA3NzhhMmJlY2I5YmY3Yjg1ZTViOFWzVug=: 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
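# --- editor's note -----------------------------------------------------------
# The host/auth.sh@101 and @102 markers above are the two loops driving this
# whole section: DH groups on the outside, key indices on the inside, with the
# digest fixed at sha512 by this point in the run. Shape reconstructed from
# those markers, with echo stand-ins for the script's own helpers (this
# excerpt shows ffdhe4096, ffdhe6144 and ffdhe8192; the full test may cover
# more groups):
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # stand-ins; the real script stores secrets
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        echo "nvmet_auth_set_key sha512 $dhgroup $keyid"    # target side
        echo "connect_authenticate sha512 $dhgroup $keyid"  # host side
    done
done
# -----------------------------------------------------------------------------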
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.164 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.732 nvme0n1 00:26:26.732 04:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.732 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.732 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.732 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.732 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.732 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.991 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.559 nvme0n1 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.559 04:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.559 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.560 04:13:26 
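# --- editor's note -----------------------------------------------------------
# Every rpc_cmd in this trace is the harness's wrapper around SPDK's JSON-RPC
# client; outside the harness the same calls can be issued directly against
# the target's RPC socket (/var/tmp/spdk.sock by default), e.g.:
#   scripts/rpc.py bdev_nvme_get_controllers
#   scripts/rpc.py bdev_nvme_detach_controller nvme0
# The xtrace_disable / "[[ 0 == 0 ]]" bracketing around each call appears to
# be the wrapper suppressing and then restoring set -x so the log is not
# flooded with the wrapper's own internals.
# -----------------------------------------------------------------------------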
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.560 04:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.127 nvme0n1 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
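# --- editor's note -----------------------------------------------------------
# Glossary for the attach calls repeated throughout: -b names the resulting
# bdev controller, -t/-f/-a/-s give transport, address family, traddr and
# trsvcid, -q is the host NQN and -n the subsystem NQN. The --dhchap-key and
# --dhchap-ctrlr-key arguments are key *names*, resolved through SPDK's
# keyring; the registration step happened before this excerpt, but would look
# something like the following (file path hypothetical):
#   scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key2.dhchap
# -----------------------------------------------------------------------------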
key=DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTA1NWRmZGZiOTgwYzA2Y2UxYWE2MTNlM2QwMTE0ZTdhNjdjNzQ1MTBmMzQzN2I3YSCR4A==: 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: ]] 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2UzM2MwYzQ4MDRjNTIxZjdkNDY1NDgyMTQxNDQ1OTZ9PIvL: 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.127 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.128 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.128 
04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.695 nvme0n1 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.695 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDc0NjQ1ZjUxNTYyMzczMDg2ZjdjZTUzOWM4MDA5MDNmMDVmNzNlZjU1ZWE1ZDk4YzYwMTI2MDAzZjYxNGRjOK6sHYY=: 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.954 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.955 04:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.955 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.522 nvme0n1 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.522 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.523 request: 00:26:29.523 { 00:26:29.523 "name": "nvme0", 00:26:29.523 "trtype": "tcp", 00:26:29.523 "traddr": "10.0.0.1", 00:26:29.523 "adrfam": "ipv4", 00:26:29.523 "trsvcid": "4420", 00:26:29.523 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.523 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.523 "prchk_reftag": false, 00:26:29.523 "prchk_guard": false, 00:26:29.523 "hdgst": false, 00:26:29.523 "ddgst": false, 00:26:29.523 "allow_unrecognized_csi": false, 00:26:29.523 "method": "bdev_nvme_attach_controller", 00:26:29.523 "req_id": 1 00:26:29.523 } 00:26:29.523 Got JSON-RPC error response 00:26:29.523 response: 00:26:29.523 { 00:26:29.523 "code": -5, 00:26:29.523 "message": "Input/output error" 00:26:29.523 } 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.523 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.783 request: 00:26:29.783 { 00:26:29.783 "name": "nvme0", 00:26:29.783 "trtype": "tcp", 00:26:29.783 "traddr": "10.0.0.1", 00:26:29.783 "adrfam": "ipv4", 00:26:29.783 "trsvcid": "4420", 00:26:29.783 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.783 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.783 "prchk_reftag": false, 00:26:29.783 "prchk_guard": false, 00:26:29.783 "hdgst": false, 00:26:29.783 "ddgst": false, 00:26:29.783 "dhchap_key": "key2", 00:26:29.783 "allow_unrecognized_csi": false, 00:26:29.783 "method": "bdev_nvme_attach_controller", 00:26:29.783 "req_id": 1 00:26:29.783 } 00:26:29.783 Got JSON-RPC error response 00:26:29.783 response: 00:26:29.783 { 00:26:29.783 "code": -5, 00:26:29.783 "message": "Input/output error" 00:26:29.783 } 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
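The failed attach attempts traced above are expected: the kernel target was just re-keyed for this host with key1/ckey1, so connecting with no DH-HMAC-CHAP key, or with key2, aborts authentication and the JSON-RPC layer reports code -5 (Input/output error). For orientation, the rpc_cmd wrapper used here boils down to an rpc.py invocation roughly like the sketch below; the SPDK path is taken from this workspace and the key names are the ones registered earlier in the run, not general defaults.

# Hedged sketch of the RPC behind the failing attach above (the expansion of
# rpc_cmd as used in this run, not a new command):
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2            # target expects key1, so this returns code -5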
00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.783 request: 00:26:29.783 { 00:26:29.783 "name": "nvme0", 00:26:29.783 "trtype": "tcp", 00:26:29.783 "traddr": "10.0.0.1", 00:26:29.783 "adrfam": "ipv4", 00:26:29.783 "trsvcid": "4420", 00:26:29.783 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.783 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.783 "prchk_reftag": false, 00:26:29.783 "prchk_guard": false, 00:26:29.783 "hdgst": false, 00:26:29.783 "ddgst": false, 00:26:29.783 "dhchap_key": "key1", 00:26:29.783 "dhchap_ctrlr_key": "ckey2", 00:26:29.783 "allow_unrecognized_csi": false, 00:26:29.783 "method": "bdev_nvme_attach_controller", 00:26:29.783 "req_id": 1 00:26:29.783 } 00:26:29.783 Got JSON-RPC error response 00:26:29.783 response: 00:26:29.783 { 00:26:29.783 "code": -5, 00:26:29.783 "message": "Input/output 
error" 00:26:29.783 } 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.783 04:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.042 nvme0n1 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.042 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.043 request: 00:26:30.043 { 00:26:30.043 "name": "nvme0", 00:26:30.043 "dhchap_key": "key1", 00:26:30.043 "dhchap_ctrlr_key": "ckey2", 00:26:30.043 "method": "bdev_nvme_set_keys", 00:26:30.043 "req_id": 1 00:26:30.043 } 00:26:30.043 Got JSON-RPC error response 00:26:30.043 response: 00:26:30.043 { 00:26:30.043 "code": -13, 00:26:30.043 "message": "Permission denied" 00:26:30.043 } 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.043 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.301 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:30.301 04:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:31.238 04:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWI4OWE1Mzg5ZDVhNGEyNWE0Zjc0N2Q5M2Y2ZmZjY2QwYzQyN2ZkMjRhYmE5YzhhKqwEyA==: 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: ]] 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjYxYWMwNTJhNjljZmIyNWM2NGY2MzEwODY1NmMwNThiNjkzMTI0NjBlNWE5ZGViwY426g==: 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.174 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.433 nvme0n1 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzVjZmI0ZjA4MWViZmEyMzBlYzJkNTVlYTQ2ZWVhYTOdraoW: 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: ]] 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWEyMDc5MjZiYmE2YWVmNmJkYzhiMDI2NzI2YjEwMTYbMnej: 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.433 request: 00:26:32.433 { 00:26:32.433 "name": "nvme0", 00:26:32.433 "dhchap_key": "key2", 00:26:32.433 "dhchap_ctrlr_key": "ckey1", 00:26:32.433 "method": "bdev_nvme_set_keys", 00:26:32.433 "req_id": 1 00:26:32.433 } 00:26:32.433 Got JSON-RPC error response 00:26:32.433 response: 00:26:32.433 { 00:26:32.433 "code": -13, 00:26:32.433 "message": "Permission denied" 00:26:32.433 } 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:32.433 04:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:33.811 04:13:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.811 rmmod nvme_tcp 00:26:33.811 rmmod nvme_fabrics 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 187892 ']' 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 187892 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 187892 ']' 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 187892 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187892 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187892' 00:26:33.811 killing process with pid 187892 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 187892 00:26:33.811 04:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 187892 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:33.811 04:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:36.347 04:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.882 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:38.882 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:39.819 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:39.819 04:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zDt /tmp/spdk.key-null.t6i /tmp/spdk.key-sha256.yuY /tmp/spdk.key-sha384.nLk /tmp/spdk.key-sha512.mNe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:39.819 04:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:42.355 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.356 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:42.615 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
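The cleanup sequence traced above tears the kernel nvmet target down in dependency order before unloading the modules. Condensed into a sketch (paths and NQNs are the ones from this run; the redirect target of the bare "echo 0" is hidden by xtrace and is assumed here to be the namespace enable attribute):

cfs=/sys/kernel/config/nvmet
subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"    # unlink the host grant first
rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/namespaces/1/enable"                     # assumption: disable ns before removal
rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0" # detach subsystem from the port
rmdir "$subsys/namespaces/1"
rmdir "$cfs/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                # only possible once configfs is empty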
00:26:42.615 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.615 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.615 00:26:42.615 real 0m53.828s 00:26:42.615 user 0m48.486s 00:26:42.615 sys 0m12.688s 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.615 ************************************ 00:26:42.615 END TEST nvmf_auth_host 00:26:42.615 ************************************ 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.615 ************************************ 00:26:42.615 START TEST nvmf_digest 00:26:42.615 ************************************ 00:26:42.615 04:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:42.874 * Looking for test storage... 
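The START/END banners and the real/user/sys timing summary bracketing each test come from the harness's run_test helper; a simplified sketch of its shape (the real helper in test/common/autotest_common.sh also manages xtrace, which is why the arg-count check '[' 3 -le 1 ']' appears in the trace):

run_test() {
    (($# <= 1)) && return 1                # need a test name plus a command
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                              # run the test, printing real/user/sys at the end
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}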
00:26:42.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.874 04:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:42.874 04:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:42.874 04:13:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.874 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:42.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.875 --rc genhtml_branch_coverage=1 00:26:42.875 --rc genhtml_function_coverage=1 00:26:42.875 --rc genhtml_legend=1 00:26:42.875 --rc geninfo_all_blocks=1 00:26:42.875 --rc geninfo_unexecuted_blocks=1 00:26:42.875 00:26:42.875 ' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:42.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.875 --rc genhtml_branch_coverage=1 00:26:42.875 --rc genhtml_function_coverage=1 00:26:42.875 --rc genhtml_legend=1 00:26:42.875 --rc geninfo_all_blocks=1 00:26:42.875 --rc geninfo_unexecuted_blocks=1 00:26:42.875 00:26:42.875 ' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:42.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.875 --rc genhtml_branch_coverage=1 00:26:42.875 --rc genhtml_function_coverage=1 00:26:42.875 --rc genhtml_legend=1 00:26:42.875 --rc geninfo_all_blocks=1 00:26:42.875 --rc geninfo_unexecuted_blocks=1 00:26:42.875 00:26:42.875 ' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:42.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.875 --rc genhtml_branch_coverage=1 00:26:42.875 --rc genhtml_function_coverage=1 00:26:42.875 --rc genhtml_legend=1 00:26:42.875 --rc geninfo_all_blocks=1 00:26:42.875 --rc geninfo_unexecuted_blocks=1 00:26:42.875 00:26:42.875 ' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.875 
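The lt/cmp_versions trace above implements a plain dotted-version comparison for the lcov check; functionally it reduces to something like this sketch (field-by-field numeric compare, missing fields treated as 0):

version_lt() {   # usage: version_lt 1.15 2  ->  returns 0 if $1 < $2
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # strictly smaller field: $1 < $2
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1   # strictly larger field: $1 > $2
    done
    return 1                                        # equal versions are not "less than"
}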
04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.875 04:13:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.875 04:13:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:49.445 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.446 
04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:49.446 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:49.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:49.446 Found net devices under 0000:af:00.0: cvl_0_0 
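[editor's note] The discovery loop traced here maps each supported NIC PCI function to its kernel net device by globbing sysfs; the two e810 ports at 0000:af:00.0/.1 resolve to cvl_0_0 and cvl_0_1. A rough standalone equivalent of that lookup, with the PCI address as an example value:

# Sketch: find the netdev name(s) behind one PCI function (address is illustrative).
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if [[ -e ${pci_net_devs[0]} ]]; then
    # Strip the sysfs path prefix, keeping only interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No bound net driver for $pci" >&2
fi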
00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:49.446 Found net devices under 0000:af:00.1: cvl_0_1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:49.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:26:49.446 00:26:49.446 --- 10.0.0.2 ping statistics --- 00:26:49.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.446 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:49.446 00:26:49.446 --- 10.0.0.1 ping statistics --- 00:26:49.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.446 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.446 04:13:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:49.446 ************************************ 00:26:49.446 START TEST nvmf_digest_clean 00:26:49.446 ************************************ 00:26:49.446 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:49.446 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:49.446 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:49.446 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=201571 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 201571 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 201571 ']' 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.447 [2024-12-10 04:13:48.096975] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:26:49.447 [2024-12-10 04:13:48.097018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.447 [2024-12-10 04:13:48.173417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.447 [2024-12-10 04:13:48.212488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.447 [2024-12-10 04:13:48.212522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.447 [2024-12-10 04:13:48.212529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.447 [2024-12-10 04:13:48.212535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.447 [2024-12-10 04:13:48.212540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
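[editor's note] nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, then waitforlisten blocks until the app's RPC socket answers. A simplified polling loop in the spirit of that helper (the real one lives in autotest_common.sh; this is only an illustration, assuming it runs from the SPDK tree):

# Sketch: wait until a pid is alive and its RPC socket accepts commands.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}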
00:26:49.447 [2024-12-10 04:13:48.213020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.447 null0 00:26:49.447 [2024-12-10 04:13:48.359606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.447 [2024-12-10 04:13:48.383805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=201612 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 201612 /var/tmp/bperf.sock 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 201612 ']' 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.447 [2024-12-10 04:13:48.438825] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:26:49.447 [2024-12-10 04:13:48.438866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201612 ] 00:26:49.447 [2024-12-10 04:13:48.513689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.447 [2024-12-10 04:13:48.553820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:49.447 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:49.724 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.724 04:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.041 nvme0n1 00:26:50.041 04:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.041 04:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.041 Running I/O for 2 seconds... 
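[editor's note] Each run_bperf pass traced in this log follows the same shape: start bdevperf paused (--wait-for-rpc) on its own socket, skip DSA (scan_dsa=false, so the software crc32c module is expected), start the framework, attach an NVMe-oF controller with data digest enabled, and drive I/O through the helper script. Roughly, with paths abbreviated relative to the SPDK tree shown in the trace:

# Sketch of one digest-clean pass (arguments as in the trace above).
rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
bperfpid=$!
$rpc framework_start_init                  # leaves accel on the software module
$rpc bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0  # --ddgst enables NVMe/TCP data digest
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests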
00:26:52.379 25290.00 IOPS, 98.79 MiB/s [2024-12-10T03:13:51.665Z] 25265.00 IOPS, 98.69 MiB/s 00:26:52.379 Latency(us) 00:26:52.379 [2024-12-10T03:13:51.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.379 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:52.379 nvme0n1 : 2.00 25288.00 98.78 0.00 0.00 5056.76 2293.76 13169.62 00:26:52.379 [2024-12-10T03:13:51.665Z] =================================================================================================================== 00:26:52.379 [2024-12-10T03:13:51.665Z] Total : 25288.00 98.78 0.00 0.00 5056.76 2293.76 13169.62 00:26:52.379 { 00:26:52.379 "results": [ 00:26:52.379 { 00:26:52.379 "job": "nvme0n1", 00:26:52.379 "core_mask": "0x2", 00:26:52.379 "workload": "randread", 00:26:52.379 "status": "finished", 00:26:52.379 "queue_depth": 128, 00:26:52.379 "io_size": 4096, 00:26:52.379 "runtime": 2.003243, 00:26:52.379 "iops": 25287.995515271985, 00:26:52.379 "mibps": 98.78123248153119, 00:26:52.379 "io_failed": 0, 00:26:52.379 "io_timeout": 0, 00:26:52.379 "avg_latency_us": 5056.756874126965, 00:26:52.379 "min_latency_us": 2293.76, 00:26:52.379 "max_latency_us": 13169.615238095239 00:26:52.379 } 00:26:52.379 ], 00:26:52.379 "core_count": 1 00:26:52.379 } 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:52.379 | select(.opcode=="crc32c") 00:26:52.379 | "\(.module_name) \(.executed)"' 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 201612 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 201612 ']' 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 201612 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201612 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201612' 00:26:52.379 killing process with pid 201612 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 201612 00:26:52.379 Received shutdown signal, test time was about 2.000000 seconds 00:26:52.379 00:26:52.379 Latency(us) 00:26:52.379 [2024-12-10T03:13:51.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.379 [2024-12-10T03:13:51.665Z] =================================================================================================================== 00:26:52.379 [2024-12-10T03:13:51.665Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.379 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 201612 00:26:52.638 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:52.638 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=202076 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 202076 /var/tmp/bperf.sock 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 202076 ']' 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:52.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:52.639 [2024-12-10 04:13:51.749364] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:26:52.639 [2024-12-10 04:13:51.749412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202076 ] 00:26:52.639 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.639 Zero copy mechanism will not be used. 00:26:52.639 [2024-12-10 04:13:51.821588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.639 [2024-12-10 04:13:51.857332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:52.639 04:13:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.899 04:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.899 04:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.467 nvme0n1 00:26:53.467 04:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.467 04:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:53.467 Zero copy mechanism will not be used. 00:26:53.467 Running I/O for 2 seconds... 
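[editor's note] After every run the test reads accel_get_stats, keeps only the crc32c row, and asserts both that the module is "software" (no DSA offload in this configuration) and that it executed at least one operation. The jq filter is the one visible in the trace; wrapping it as a standalone check might look like:

# Sketch: verify crc32c digests were computed by the software accel module.
read -r acc_module acc_executed < <(
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
[[ $acc_module == software ]] && (( acc_executed > 0 )) &&
    echo "digest path OK: $acc_executed crc32c ops via $acc_module"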
00:26:55.339 6068.00 IOPS, 758.50 MiB/s [2024-12-10T03:13:54.625Z] 6023.00 IOPS, 752.88 MiB/s 00:26:55.339 Latency(us) 00:26:55.339 [2024-12-10T03:13:54.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.339 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:55.339 nvme0n1 : 2.00 6024.33 753.04 0.00 0.00 2653.35 643.66 6335.15 00:26:55.339 [2024-12-10T03:13:54.625Z] =================================================================================================================== 00:26:55.339 [2024-12-10T03:13:54.625Z] Total : 6024.33 753.04 0.00 0.00 2653.35 643.66 6335.15 00:26:55.339 { 00:26:55.339 "results": [ 00:26:55.339 { 00:26:55.339 "job": "nvme0n1", 00:26:55.339 "core_mask": "0x2", 00:26:55.339 "workload": "randread", 00:26:55.339 "status": "finished", 00:26:55.339 "queue_depth": 16, 00:26:55.339 "io_size": 131072, 00:26:55.339 "runtime": 2.002215, 00:26:55.339 "iops": 6024.32805667723, 00:26:55.339 "mibps": 753.0410070846538, 00:26:55.340 "io_failed": 0, 00:26:55.340 "io_timeout": 0, 00:26:55.340 "avg_latency_us": 2653.349091914, 00:26:55.340 "min_latency_us": 643.6571428571428, 00:26:55.340 "max_latency_us": 6335.1466666666665 00:26:55.340 } 00:26:55.340 ], 00:26:55.340 "core_count": 1 00:26:55.340 } 00:26:55.340 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.340 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.340 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.340 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.340 | select(.opcode=="crc32c") 00:26:55.340 | "\(.module_name) \(.executed)"' 00:26:55.340 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 202076 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 202076 ']' 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 202076 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202076 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202076' 00:26:55.601 killing process with pid 202076 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 202076 00:26:55.601 Received shutdown signal, test time was about 2.000000 seconds 00:26:55.601 00:26:55.601 Latency(us) 00:26:55.601 [2024-12-10T03:13:54.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.601 [2024-12-10T03:13:54.887Z] =================================================================================================================== 00:26:55.601 [2024-12-10T03:13:54.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.601 04:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 202076 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=202717 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 202717 /var/tmp/bperf.sock 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 202717 ']' 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.860 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:55.860 [2024-12-10 04:13:55.068778] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:26:55.860 [2024-12-10 04:13:55.068828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202717 ] 00:26:56.118 [2024-12-10 04:13:55.144600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.118 [2024-12-10 04:13:55.181949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.118 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.118 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:56.118 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:56.118 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:56.118 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:56.377 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.377 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.635 nvme0n1 00:26:56.635 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:56.635 04:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.635 Running I/O for 2 seconds... 
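[editor's note] The summary tables printed after each run are derived from the JSON blob bdevperf emits: IOPS is completed I/O divided by runtime, and MiB/s follows from the block size, e.g. the earlier randread run's 25288.00 IOPS x 4096 B / 2^20 = 98.78 MiB/s. A hypothetical one-liner to recompute that from a saved result (file name is an assumption):

# Sketch: recompute MiB/s from a captured bdevperf JSON result.
jq -r '.results[0] |
       "\(.iops) IOPS -> \(.iops * 4096 / 1048576) MiB/s over \(.runtime)s"' \
   bperf_result.json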
00:26:58.949 27558.00 IOPS, 107.65 MiB/s [2024-12-10T03:13:58.235Z] 27619.00 IOPS, 107.89 MiB/s 00:26:58.949 Latency(us) 00:26:58.949 [2024-12-10T03:13:58.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.949 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:58.949 nvme0n1 : 2.01 27619.83 107.89 0.00 0.00 4625.93 2871.10 7240.17 00:26:58.949 [2024-12-10T03:13:58.235Z] =================================================================================================================== 00:26:58.949 [2024-12-10T03:13:58.235Z] Total : 27619.83 107.89 0.00 0.00 4625.93 2871.10 7240.17 00:26:58.949 { 00:26:58.949 "results": [ 00:26:58.949 { 00:26:58.949 "job": "nvme0n1", 00:26:58.949 "core_mask": "0x2", 00:26:58.949 "workload": "randwrite", 00:26:58.949 "status": "finished", 00:26:58.949 "queue_depth": 128, 00:26:58.949 "io_size": 4096, 00:26:58.949 "runtime": 2.005733, 00:26:58.949 "iops": 27619.82776371531, 00:26:58.949 "mibps": 107.88995220201294, 00:26:58.949 "io_failed": 0, 00:26:58.949 "io_timeout": 0, 00:26:58.949 "avg_latency_us": 4625.926755616069, 00:26:58.949 "min_latency_us": 2871.1009523809525, 00:26:58.949 "max_latency_us": 7240.167619047619 00:26:58.949 } 00:26:58.949 ], 00:26:58.949 "core_count": 1 00:26:58.949 } 00:26:58.949 04:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:58.949 04:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:58.949 04:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:58.949 04:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:58.949 | select(.opcode=="crc32c") 00:26:58.949 | "\(.module_name) \(.executed)"' 00:26:58.949 04:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 202717 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 202717 ']' 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 202717 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202717 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202717' 00:26:58.949 killing process with pid 202717 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 202717 00:26:58.949 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.949 00:26:58.949 Latency(us) 00:26:58.949 [2024-12-10T03:13:58.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.949 [2024-12-10T03:13:58.235Z] =================================================================================================================== 00:26:58.949 [2024-12-10T03:13:58.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.949 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 202717 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=203208 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 203208 /var/tmp/bperf.sock 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 203208 ']' 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.209 [2024-12-10 04:13:58.334823] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:26:59.209 [2024-12-10 04:13:58.334870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203208 ] 00:26:59.209 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.209 Zero copy mechanism will not be used. 00:26:59.209 [2024-12-10 04:13:58.409127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.209 [2024-12-10 04:13:58.444598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.209 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:59.468 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:59.468 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:59.468 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:59.468 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.468 04:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.727 nvme0n1 00:26:59.727 04:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.727 04:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.986 Zero copy mechanism will not be used. 00:26:59.986 Running I/O for 2 seconds... 
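[editor's note] Once results are in, each pass tears down with killprocess: confirm the pid is set and still alive, make sure the command name is one of ours (not sudo) before signalling, then kill and reap. A condensed sketch of the checks visible in the trace, not the exact helper:

# Sketch: the teardown pattern repeated after each run in this log.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                     # still running?
    if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap child; ignore non-zero exit
    fi
}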
00:27:01.861 6331.00 IOPS, 791.38 MiB/s
[2024-12-10T03:14:01.147Z] 6708.50 IOPS, 838.56 MiB/s
00:27:01.861 Latency(us)
00:27:01.861 [2024-12-10T03:14:01.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:01.861 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:01.861 nvme0n1 : 2.00 6706.45 838.31 0.00 0.00 2381.79 1716.42 13668.94
00:27:01.861 [2024-12-10T03:14:01.147Z] ===================================================================================================================
00:27:01.861 [2024-12-10T03:14:01.147Z] Total : 6706.45 838.31 0.00 0.00 2381.79 1716.42 13668.94
00:27:01.861 {
00:27:01.861   "results": [
00:27:01.861     {
00:27:01.861       "job": "nvme0n1",
00:27:01.861       "core_mask": "0x2",
00:27:01.861       "workload": "randwrite",
00:27:01.861       "status": "finished",
00:27:01.861       "queue_depth": 16,
00:27:01.861       "io_size": 131072,
00:27:01.861       "runtime": 2.002998,
00:27:01.861       "iops": 6706.447035893196,
00:27:01.861       "mibps": 838.3058794866495,
00:27:01.861       "io_failed": 0,
00:27:01.861       "io_timeout": 0,
00:27:01.861       "avg_latency_us": 2381.79322478757,
00:27:01.861       "min_latency_us": 1716.4190476190477,
00:27:01.861       "max_latency_us": 13668.937142857143
00:27:01.861     }
00:27:01.861   ],
00:27:01.861   "core_count": 1
00:27:01.861 }
00:27:01.861 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:01.861 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:01.861 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:01.861 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:01.861 | select(.opcode=="crc32c")
00:27:01.861 | "\(.module_name) \(.executed)"'
00:27:01.861 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 203208
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 203208 ']'
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 203208
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203208
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
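The get_accel_stats step above boils down to one RPC plus a jq filter; the test then compares the reported module against exp_module=software, since DSA scanning is disabled for this run:

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected output here: "software <count>", with count > 0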
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203208'
00:27:02.119 killing process with pid 203208
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 203208
00:27:02.119 Received shutdown signal, test time was about 2.000000 seconds
00:27:02.119
00:27:02.119 Latency(us)
[2024-12-10T03:14:01.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:02.119 [2024-12-10T03:14:01.405Z] ===================================================================================================================
00:27:02.119 [2024-12-10T03:14:01.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:02.119 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 203208
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 201571
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 201571 ']'
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 201571
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201571
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201571'
00:27:02.378 killing process with pid 201571
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 201571
00:27:02.378 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 201571
00:27:02.637
00:27:02.637 real 0m13.720s
00:27:02.637 user 0m26.185s
00:27:02.637 sys 0m4.617s
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:02.637 ************************************
00:27:02.637 END TEST nvmf_digest_clean
00:27:02.637 ************************************
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:02.637 ************************************
00:27:02.637 START TEST nvmf_digest_error
00:27:02.637 ************************************
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
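The killprocess xtrace above follows a fixed pattern; a condensed sketch of that helper (the real one in autotest_common.sh carries extra guards and platform branches):

killprocess() {
    local pid=$1
    [ -n "$pid" ] && kill -0 "$pid" || return 1          # pid set and still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1           # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}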
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=203765
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 203765
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 203765 ']'
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:02.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:02.637 04:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.637 [2024-12-10 04:14:01.884351] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:02.637 [2024-12-10 04:14:01.884393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:02.896 [2024-12-10 04:14:01.963342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.896 [2024-12-10 04:14:02.002978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:02.896 [2024-12-10 04:14:02.003013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:02.896 [2024-12-10 04:14:02.003021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:02.896 [2024-12-10 04:14:02.003031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:02.896 [2024-12-10 04:14:02.003036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
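nvmfappstart --wait-for-rpc, expanded above, amounts to launching the target inside the test network namespace and holding it before init; a sketch with pid handling simplified, reusing SPDK_DIR from the earlier block:

# --wait-for-rpc keeps the target pre-init so crc32c can be rerouted to the
# error-injecting accel module before any subsystem is configured.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!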
00:27:02.896 [2024-12-10 04:14:02.003548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.896 [2024-12-10 04:14:02.076003] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.896 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.896 null0
00:27:03.156 [2024-12-10 04:14:02.167831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:03.156 [2024-12-10 04:14:02.192034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=203929
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 203929 /var/tmp/bperf.sock
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 203929 ']'
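The target-side error wiring shown above, and toggled again in the traces that follow, reduces to the RPCs below; framework_start_init is not expanded in this trace and is assumed here to run between the assignment and the transport setup. SPDK_DIR is reused from the earlier block; rpc_cmd talks to the default /var/tmp/spdk.sock inside the namespace:

"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o crc32c -m error   # pre-init reroute
"$SPDK_DIR/scripts/rpc.py" framework_start_init                  # assumed, see above
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable        # clean pass
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt 256 ops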
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:03.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:03.156 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.156 [2024-12-10 04:14:02.244004] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:03.156 [2024-12-10 04:14:02.244046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203929 ]
00:27:03.156 [2024-12-10 04:14:02.317902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:03.156 [2024-12-10 04:14:02.356643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.415 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.674 nvme0n1
00:27:03.933 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:03.933 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.933 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.933
04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.933 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:03.933 04:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.933 Running I/O for 2 seconds... 00:27:03.933 [2024-12-10 04:14:03.083643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.933 [2024-12-10 04:14:03.083673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.933 [2024-12-10 04:14:03.083684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.933 [2024-12-10 04:14:03.095427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.095454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.095466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.104185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.104212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.104221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.114822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.114846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.114854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.124653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.124675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.124682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.132440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.132465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.132474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.141794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.141817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.151332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.151363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.162658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.162680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.162688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.172602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.172624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.172633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.182241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.182262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.182270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.190925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.190946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.190954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.200089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.200110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.200117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.934 [2024-12-10 04:14:03.209549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:03.934 [2024-12-10 04:14:03.209570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.934 [2024-12-10 04:14:03.209578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.217939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.217959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.217967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.229211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.229232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.229241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.240001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.240022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.240030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.248810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.248839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.260512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.260534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.260542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.268416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.268438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.268448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.280104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.280124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.280132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.292486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.292508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.292516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.300552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.300574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.300582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.312009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.312030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.312039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.322591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.322613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.322621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.330916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.330946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.342635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.342657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:04.194 [2024-12-10 04:14:03.342665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.354821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.354843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.354851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.365792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.365814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.365822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.374070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.374091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.374099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.383192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.383213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.383221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.392442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.392462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.392470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.400968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.400988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.400996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.411964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.411985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4120 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.411994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.422360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.422381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.422389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.434380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.434401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.434409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.444963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.444987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.445001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.453374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.453394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.453402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.464530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.464551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.464559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.194 [2024-12-10 04:14:03.474224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.194 [2024-12-10 04:14:03.474244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.194 [2024-12-10 04:14:03.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.482996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.483017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:6000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.483025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.494478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.494499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.494507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.505554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.505575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.505585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.513570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.513591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.513598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.524296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.524317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.524325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.535446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.535471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.535479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.545228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.545249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.545258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.553571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.553592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.553600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.563560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.563581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.563589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.573973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.573993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.574001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.582302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.582322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.582331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.593249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.593269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.593278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.601554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.601574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.601582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.613776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.454 [2024-12-10 04:14:03.613797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.454 [2024-12-10 04:14:03.613804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.454 [2024-12-10 04:14:03.623941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 
[2024-12-10 04:14:03.623962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.623970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.631804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.631825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.631833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.641464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.641484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.652402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.652423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.652431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.662654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.662674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.662682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.670877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.670897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.670904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.683007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.683035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.691244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.691265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.691273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.702588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.702609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.702620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.710934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.710954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.710962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.720720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.720741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.720749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.455 [2024-12-10 04:14:03.730917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.455 [2024-12-10 04:14:03.730939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.455 [2024-12-10 04:14:03.730947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.714 [2024-12-10 04:14:03.739875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.714 [2024-12-10 04:14:03.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.714 [2024-12-10 04:14:03.739903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.714 [2024-12-10 04:14:03.752146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0) 00:27:04.714 [2024-12-10 04:14:03.752173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.714 [2024-12-10 04:14:03.752182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.714 [2024-12-10 04:14:03.762622] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0)
00:27:04.714 [2024-12-10 04:14:03.762642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.714 [2024-12-10 04:14:03.762650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.714 [2024-12-10 04:14:03.771474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0)
00:27:04.715 [2024-12-10 04:14:03.771494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.715 [2024-12-10 04:14:03.771503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same triplet — data digest error on tqpair=(0x9d5ae0), the failed READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for the remainder of the run, differing only in timestamp, cid, and lba; one bdevperf progress report is interleaved mid-run: ...]
00:27:04.975 25482.00 IOPS, 99.54 MiB/s [2024-12-10T03:14:04.261Z]
[... repeated error triplets continue ...]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d5ae0)
00:27:06.017 [2024-12-10 04:14:05.071269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.017 [2024-12-10 04:14:05.071277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:06.017 25499.00 IOPS, 99.61 MiB/s
00:27:06.017 Latency(us)
00:27:06.017 [2024-12-10T03:14:05.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.017 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:06.017 nvme0n1 : 2.00 25516.71 99.67 0.00 0.00 5011.71 2122.12 16727.28
00:27:06.017 [2024-12-10T03:14:05.303Z] ===================================================================================================================
00:27:06.017 [2024-12-10T03:14:05.303Z] Total : 25516.71 99.67 0.00 0.00 5011.71 2122.12 16727.28
00:27:06.017 {
00:27:06.017   "results": [
00:27:06.017     {
00:27:06.017       "job": "nvme0n1",
00:27:06.017       "core_mask": "0x2",
00:27:06.017       "workload": "randread",
00:27:06.017       "status": "finished",
00:27:06.017       "queue_depth": 128,
00:27:06.017       "io_size": 4096,
00:27:06.017       "runtime": 2.003628,
00:27:06.017       "iops": 25516.712683192687,
00:27:06.017       "mibps": 99.67465891872143,
00:27:06.017       "io_failed": 0,
00:27:06.017       "io_timeout": 0,
00:27:06.017       "avg_latency_us": 5011.712214118993,
00:27:06.017       "min_latency_us": 2122.118095238095,
00:27:06.017       "max_latency_us": 16727.28380952381
00:27:06.017     }
00:27:06.017   ],
00:27:06.017   "core_count": 1
00:27:06.017 }
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:06.017 | .driver_specific
00:27:06.017 | .nvme_error
00:27:06.017 | .status_code
00:27:06.017 | .command_transient_transport_error'
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 203929
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 203929 ']'
00:27:06.017 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 203929
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203929
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203929'
00:27:06.275 killing process with pid 203929
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 203929
00:27:06.275 Received shutdown signal, test time was about 2.000000 seconds
00:27:06.275 00
00:27:06.275 Latency(us)
00:27:06.275 [2024-12-10T03:14:05.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.275 [2024-12-10T03:14:05.561Z] ===================================================================================================================
00:27:06.275 [2024-12-10T03:14:05.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 203929
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=204394
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 204394 /var/tmp/bperf.sock
00:27:06.275 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:06.276 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 204394 ']'
00:27:06.276 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:06.276 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:06.276 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:06.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:06.276 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:06.534 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
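The xtrace above shows run_bperf_err relaunching bdevperf for the second error pass: a 131072-byte randread workload at queue depth 16 (versus 4096 bytes at depth 128 in the first pass), parked with -z until it is configured over its private RPC socket. The same launch-and-wait pattern can be reproduced outside the harness; the following is a minimal bash sketch assuming the SPDK repo root as the working directory, with a simple polling loop standing in for the harness's waitforlisten helper (not its exact code):

    # Start bdevperf on core 1 (-m 2) with a dedicated RPC socket; -z keeps it
    # idle until perform_tests is issued, and -w/-o/-q/-t mirror the trace above.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the UNIX-domain socket until the RPC server answers
    # (rpc_get_methods is a cheap built-in RPC).
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done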
00:27:06.534 [2024-12-10 04:14:05.558291] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:06.534 [2024-12-10 04:14:05.558341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204394 ]
00:27:06.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:06.534 Zero copy mechanism will not be used.
00:27:06.534 [2024-12-10 04:14:05.630459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.534 [2024-12-10 04:14:05.666425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:06.534 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:06.534 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:06.534 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:06.535 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:06.794 04:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.053 nvme0n1
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:07.053 04:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:07.053 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:07.053 Zero copy mechanism will not be used.
00:27:07.053 Running I/O for 2 seconds...
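That RPC sequence is the heart of the digest-error test: the bdevperf side enables per-status-code NVMe error counters and unlimited retries, attaches the controller with --ddgst so TCP data digests are generated and checked, and accel_error_inject_error (issued through rpc_cmd, i.e. against the target application's default RPC socket rather than bperf.sock) injects corruption into crc32c operations, with the -o/-t/-i arguments taken verbatim from the trace. Each corrupted digest then surfaces below as a data digest error whose completion is retried as a COMMAND TRANSIENT TRANSPORT ERROR. Condensed into repo-relative commands (the absolute Jenkins paths above are shortened, and bperf_rpc is re-created here as a tiny wrapper):

    # Send RPCs to the bdevperf instance on its private socket.
    bperf_rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    # Count NVMe errors per status code and retry failed I/O indefinitely:
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest enabled; this creates bdev nvme0n1:
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption in the target app (flags copied from the trace):
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the configured 2-second workload:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # Afterwards, get_transient_errcount (run after the first pass above)
    # asserts that the transient-transport-error counter is positive:
    errs=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))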
00:27:07.053 [2024-12-10 04:14:06.334639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.053 [2024-12-10 04:14:06.334675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.053 [2024-12-10 04:14:06.334686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.340221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.340248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.340258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.345827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.345850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.345858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.351438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.351460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.351469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.357131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.357155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.357164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.363805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.363829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.363838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.371697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.371721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.371730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.378900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.378924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.378933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.386102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.386126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.386134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.391920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.391943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.391951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.397463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.397486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.397494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.313 [2024-12-10 04:14:06.402892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.313 [2024-12-10 04:14:06.402912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.313 [2024-12-10 04:14:06.402921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.408208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.408229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.408237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.413464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.413487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.413495] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.418851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.418873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.418881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.424295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.424316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.424324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.429830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.429852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.429864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.435106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.435127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.435135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.437950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.437972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.437980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.443253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.443274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.443284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.448498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.448519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.448527] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.453775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.453796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.453804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.459099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.459121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.459129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.464488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.464510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.464518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.469859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.469880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.469888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.475098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.475123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.475131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.480708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.480730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.480738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.486038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.314 [2024-12-10 04:14:06.486067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.491456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.491478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.491486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.496944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.496966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.496974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.502177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.502198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.502206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.507602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.507624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.507632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.512555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.512577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.512585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.516633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.516655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.516663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.521799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.521820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.521828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.526956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.526981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.526991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.531843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.314 [2024-12-10 04:14:06.531864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.314 [2024-12-10 04:14:06.531872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.314 [2024-12-10 04:14:06.537100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.537121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.537129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.542091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.542113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.542121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.547476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.547497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.547505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.552866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.552889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.552897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.558401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.558423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.558431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.563746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.563768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.563782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.569129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.569150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.569159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.574522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.574544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.574551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.579737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.579758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.579766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.585070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.585092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.585100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.315 [2024-12-10 04:14:06.590828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.315 [2024-12-10 04:14:06.590850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.315 [2024-12-10 04:14:06.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.596353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 
04:14:06.596375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.596383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.601867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.601889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.601896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.607318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.607339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.607347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.612604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.612629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.612638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.618063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.618092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.623319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.623341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.623348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.628676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.628698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.628705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.634093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.634114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.634122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.639352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.639381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.644637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.644658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.644666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.650089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.650111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.650119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.655639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.655660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.655669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.661258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.661278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.661286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.666600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.666622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.666630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.672058] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.672080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.672088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.677566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.677587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.575 [2024-12-10 04:14:06.677595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.575 [2024-12-10 04:14:06.682816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.575 [2024-12-10 04:14:06.682837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.682845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.688135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.688156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.688164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.693463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.693484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.693492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.699643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.699665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.699673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.706963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.706986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.706999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:07.576 [2024-12-10 04:14:06.714609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.714640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.720885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.720907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.720915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.727325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.727347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.727355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.733746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.733768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.733776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.740104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.740126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.740134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.746606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.746628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.746636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.750911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.750932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.750940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.755917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.755939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.755946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.762753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.762775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.762783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.769303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.769325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.775104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.775126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.775135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.780484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.780506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.780514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.785452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.785475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.785483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.791010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.791032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.791041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.576 [2024-12-10 04:14:06.796446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.576 [2024-12-10 04:14:06.796467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.576 [2024-12-10 04:14:06.796475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[several dozen further completions in the same three-message pattern omitted, timestamps 00:27:07.576 through 00:27:07.838: each READ on qid:1 hits "data digest error on tqpair=(0x232b6a0)" in nvme_tcp.c:1365 and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the cid, lba, and sqhd fields vary]
00:27:07.838 [2024-12-10 04:14:07.081214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.838 [2024-12-10 04:14:07.081235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.838 [2024-12-10 04:14:07.081243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
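The repeating block above (and continuing below) is one event: the host receives a C2H data PDU whose payload no longer matches its CRC32C data digest (DDGST), so the recompute callback named in the log (nvme_tcp_accel_seq_recv_compute_crc32_done) flags a data digest error, and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status, instead of being returned with silently corrupted data. A minimal standalone sketch of that receive-side check follows; the bitwise crc32c() here is a reference implementation rather than SPDK's accelerated routine, and the payload size, fill pattern, and injected corruption are invented for illustration.

/* ddgst_check.c - sketch of the receive-side data digest check that is
 * failing throughout the log above. Build: cc -O2 -o ddgst_check ddgst_check.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reference bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the digest NVMe/TCP uses for the DDGST field. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFF;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	/* Pretend payload of one READ: len:32 blocks, assuming 512-byte blocks. */
	static uint8_t payload[32 * 512];
	memset(payload, 0xA5, sizeof(payload));

	uint32_t ddgst_sent = crc32c(payload, sizeof(payload)); /* digest the sender appended */
	payload[100] ^= 0x01;                                   /* injected in-flight corruption */
	uint32_t ddgst_recv = crc32c(payload, sizeof(payload)); /* receiver recomputes on arrival */

	if (ddgst_recv != ddgst_sent) {
		/* The condition nvme_tcp.c logs as "data digest error"; the request
		 * is then completed with status (00/22) so the upper layer may
		 * retry rather than consume bad data. */
		printf("data digest error: sent=0x%08x recv=0x%08x\n",
		       ddgst_sent, ddgst_recv);
		return 1;
	}
	printf("digest ok: 0x%08x\n", ddgst_sent);
	return 0;
}

Because (00/22) is a transient transport error rather than a data error, the I/O generator driving the qpair keeps resubmitting commands, which is why the run still reports throughput (the 5515.00 IOPS sample below) while every individual completion is logged as failed.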
00:27:07.838 [2024-12-10 04:14:07.086333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:07.838 [2024-12-10 04:14:07.086355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.838 [2024-12-10 04:14:07.086363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[several dozen more identical digest-error completions omitted, timestamps 00:27:07.838 through 00:27:08.100]
00:27:08.100 5515.00 IOPS, 689.38 MiB/s [2024-12-10T03:14:07.386Z]
[the same pattern continues, timestamps 00:27:08.100 through 00:27:08.360, including completions on cid:2, cid:3, and cid:4 that did not appear earlier]
00:27:08.360 [2024-12-10 04:14:07.608855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.608878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.614210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.614233] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.614242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.619619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.619642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.619651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.624268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.624290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.624299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.629553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.629576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.629584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.634797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.634819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.634827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.360 [2024-12-10 04:14:07.640019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.360 [2024-12-10 04:14:07.640042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.360 [2024-12-10 04:14:07.640050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.645261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.645284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.645291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.650520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 
[2024-12-10 04:14:07.650541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.650549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.655731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.655752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.655760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.660963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.660983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.660991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.666201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.666223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.666231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.671318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.671339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.671347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.676379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.676400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.676408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.681574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.681595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.681608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.686727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.686748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.691861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.691883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.697039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.697061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.697068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.702454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.702476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.702484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.708246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.708267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.708275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.713413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.713436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.713444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.718581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.718603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.718612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.723697] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.723718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.723726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.728888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.620 [2024-12-10 04:14:07.728914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.620 [2024-12-10 04:14:07.728922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.620 [2024-12-10 04:14:07.734032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.621 [2024-12-10 04:14:07.734054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.621 [2024-12-10 04:14:07.734062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.621 [2024-12-10 04:14:07.739220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.621 [2024-12-10 04:14:07.739241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.621 [2024-12-10 04:14:07.739250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.621 [2024-12-10 04:14:07.744329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.621 [2024-12-10 04:14:07.744351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.621 [2024-12-10 04:14:07.744359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.621 [2024-12-10 04:14:07.749504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.621 [2024-12-10 04:14:07.749526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.621 [2024-12-10 04:14:07.749534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.621 [2024-12-10 04:14:07.754660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:08.621 [2024-12-10 04:14:07.754682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.621 [2024-12-10 04:14:07.754690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
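The records above all trace the same NVMe/TCP data-digest failure path: nvme_tcp_accel_seq_recv_compute_crc32_done computes a CRC32C over each received data PDU payload, the result does not match the DDGST carried in the PDU, and the driver completes the affected READ with NVMe generic status 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal, self-contained C sketch of that style of digest check (hypothetical helper names crc32c() and ddgst_ok(); an illustration assuming the digest is plain CRC32C over the payload, not SPDK's actual implementation):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise (slow but dependency-free) reflected CRC-32C, polynomial 0x1EDC6F41. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                     /* standard CRC-32C init */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                 /* one bit at a time */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;                       /* final inversion */
}

/* Compare the digest received with the data PDU against the payload.
 * On mismatch a host would fail the command the way this log shows:
 * generic status 0x22, printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). */
static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
    uint8_t data[512] = { 0xAB };                   /* stand-in PDU payload */
    uint32_t ddgst = crc32c(data, sizeof(data));    /* digest as sent */

    data[100] ^= 0x01;                              /* flip one bit "on the wire" */
    printf("digest %s\n",
           ddgst_ok(data, sizeof(data), ddgst) ? "ok" : "mismatch (00/22)");
    return 0;
}

A single flipped payload bit changes the CRC32C, so every corrupted READ in this window fails the digest check and is completed with the same transient transport status instead of corrupted data reaching the application, which is the behavior this error-injection pass is exercising.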
00:27:08.621 [2024-12-10 04:14:07.759772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0)
00:27:08.621 [2024-12-10 04:14:07.759794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.621 [2024-12-10 04:14:07.759802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... about 85 similar record groups elided, timestamps 04:14:07.764 through 04:14:08.230: data digest error on tqpair=(0x232b6a0), then the affected READ (qid:1, varying cid and lba early on, all cid:12 toward the end, len:32) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:27:09.144 [2024-12-10 04:14:08.235444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0)
00:27:09.144 [2024-12-10 04:14:08.235468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-12-10 04:14:08.235477] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.241022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.241042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.241050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.246340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.246361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.246369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.251439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.251459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.251467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.256493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.256513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.256521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.261593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.261612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.261620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.266717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.266738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.266746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.271760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.271780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.144 [2024-12-10 04:14:08.271787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.277165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.277194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.277202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.282513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.282535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.282543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.287920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.287941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.287950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.293207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.144 [2024-12-10 04:14:08.293229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-12-10 04:14:08.293238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.144 [2024-12-10 04:14:08.298190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.298211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.298219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.303531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.303552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.303560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.308781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.308802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.308810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.314089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.314110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.314117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.319306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.319326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.319333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.324567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.324592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.324602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.145 [2024-12-10 04:14:08.329819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b6a0) 00:27:09.145 [2024-12-10 04:14:08.329841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-12-10 04:14:08.329849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.145 5611.00 IOPS, 701.38 MiB/s 00:27:09.145 Latency(us) 00:27:09.145 [2024-12-10T03:14:08.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.145 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:09.145 nvme0n1 : 2.00 5610.27 701.28 0.00 0.00 2849.33 470.06 12295.80 00:27:09.145 [2024-12-10T03:14:08.431Z] =================================================================================================================== 00:27:09.145 [2024-12-10T03:14:08.431Z] Total : 5610.27 701.28 0.00 0.00 2849.33 470.06 12295.80 00:27:09.145 { 00:27:09.145 "results": [ 00:27:09.145 { 00:27:09.145 "job": "nvme0n1", 00:27:09.145 "core_mask": "0x2", 00:27:09.145 "workload": "randread", 00:27:09.145 "status": "finished", 00:27:09.145 "queue_depth": 16, 00:27:09.145 "io_size": 131072, 00:27:09.145 "runtime": 2.003112, 00:27:09.145 "iops": 5610.2704192276815, 00:27:09.145 "mibps": 701.2838024034602, 00:27:09.145 "io_failed": 0, 00:27:09.145 "io_timeout": 0, 00:27:09.145 "avg_latency_us": 2849.3326973957405, 00:27:09.145 "min_latency_us": 470.0647619047619, 00:27:09.145 "max_latency_us": 12295.801904761905 00:27:09.145 } 00:27:09.145 ], 00:27:09.145 "core_count": 1 00:27:09.145 } 
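The summary table and the JSON block above report the same run; bdevperf emits both when a test finishes. As a minimal sketch (assuming the JSON above were captured to a hypothetical file results.json), the headline numbers can be pulled back out with jq:

    # Hypothetical: results.json holds the JSON block printed by bdevperf above.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed \(.io_failed)"' results.json

Note that io_failed stays 0 even though every listed READ hit a digest error: with the script's bdev_nvme_set_options --bdev-retry-count -1 setting the transient transport errors are retried rather than failed, so the test checks the per-bdev NVMe error counters instead, as traced next.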
00:27:09.145 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:09.145 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:09.145 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:09.145 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 ))
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 204394
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 204394 ']'
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 204394
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 204394
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 204394'
00:27:09.404 killing process with pid 204394
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 204394
00:27:09.404 Received shutdown signal, test time was about 2.000000 seconds
00:27:09.404 Latency(us)
00:27:09.404 [2024-12-10T03:14:08.690Z] Device Information : runtime(s)  IOPS   MiB/s  Fail/s  TO/s  Average  min   max
00:27:09.404 [2024-12-10T03:14:08.690Z] ===================================================================================================================
00:27:09.404 [2024-12-10T03:14:08.690Z] Total :              0.00        0.00   0.00   0.00    0.00  0.00     0.00
00:27:09.404 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 204394
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=204856
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 204856 /var/tmp/bperf.sock
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 204856 ']'
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:09.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:09.664 04:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:09.664 [2024-12-10 04:14:08.820747] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:09.664 [2024-12-10 04:14:08.820795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204856 ]
00:27:09.664 [2024-12-10 04:14:08.893278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:09.664 [2024-12-10 04:14:08.929081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:09.923 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:09.923 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:09.923 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:09.923 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:10.182 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:10.441 nvme0n1
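The get_transient_errcount helper traced above is just one RPC plus a jq filter over its output. A standalone sketch of the same query, assuming a bdevperf instance is listening on /var/tmp/bperf.sock and was configured with bdev_nvme_set_options --nvme-error-stat (without that option the nvme_error block is not populated):

    # Count completions with status COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1,
    # mirroring get_transient_errcount from host/digest.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"

The (( 363 > 0 )) line in the trace is exactly this check after expansion: the randread pass above recorded 363 such completions.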
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:10.441 04:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:10.700 Running I/O for 2 seconds...
00:27:10.700 [2024-12-10 04:14:09.759748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee23b8
00:27:10.700 [2024-12-10 04:14:09.760649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.700 [2024-12-10 04:14:09.760682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... dozens of further data_crc32_calc_done "Data digest error" records on tqpair=(0x1ac0410) omitted (04:14:09.768460 through 04:14:10.509180, each with a distinct pdu=0x200016e... offset): every one is a WRITE on qid:1 (len:1, varying cid/lba) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the captured section breaks off mid-record here ...]
00:27:11.482 [2024-12-10 04:14:10.509200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.517678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee7c50 00:27:11.482 [2024-12-10 04:14:10.518315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.518334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.526120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef1ca0 00:27:11.482 [2024-12-10 04:14:10.526858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.526878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.537460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee01f8 00:27:11.482 [2024-12-10 04:14:10.538665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.538689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.546933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef7da8 00:27:11.482 [2024-12-10 04:14:10.547700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.547719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.555863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef2d80 00:27:11.482 [2024-12-10 04:14:10.556823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.556842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.563980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee01f8 00:27:11.482 [2024-12-10 04:14:10.564820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.564839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.574255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef81e0 00:27:11.482 [2024-12-10 04:14:10.575252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3310 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.575272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.583723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee5220 00:27:11.482 [2024-12-10 04:14:10.584804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.584824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.593057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efd640 00:27:11.482 [2024-12-10 04:14:10.594159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.602228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef9f68 00:27:11.482 [2024-12-10 04:14:10.603330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.603350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.610731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef92c0 00:27:11.482 [2024-12-10 04:14:10.611832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.611851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.620100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee4140 00:27:11.482 [2024-12-10 04:14:10.620767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.620788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.630507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef7100 00:27:11.482 [2024-12-10 04:14:10.631840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.631859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.639359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef31b8 00:27:11.482 [2024-12-10 04:14:10.640676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23731 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.640695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.647913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee3d08 00:27:11.482 [2024-12-10 04:14:10.648891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.648910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.656947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eeaef0 00:27:11.482 [2024-12-10 04:14:10.657922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.482 [2024-12-10 04:14:10.657941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.482 [2024-12-10 04:14:10.666156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef7da8 00:27:11.482 [2024-12-10 04:14:10.667128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.667147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.675351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef57b0 00:27:11.483 [2024-12-10 04:14:10.676295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.676314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.684255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eef6a8 00:27:11.483 [2024-12-10 04:14:10.685229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.685248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.693227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef92c0 00:27:11.483 [2024-12-10 04:14:10.694187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.694206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.702149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ede038 00:27:11.483 [2024-12-10 04:14:10.703093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:13279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.703112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.711114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efb048 00:27:11.483 [2024-12-10 04:14:10.711995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.712013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.720086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:11.483 [2024-12-10 04:14:10.721035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.721054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.729299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eee190 00:27:11.483 [2024-12-10 04:14:10.730230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.730249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.738273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eeb760 00:27:11.483 [2024-12-10 04:14:10.739220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.739239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.747216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef20d8 00:27:11.483 27441.00 IOPS, 107.19 MiB/s [2024-12-10T03:14:10.769Z] [2024-12-10 04:14:10.748065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.748083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.483 [2024-12-10 04:14:10.756422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee0630 00:27:11.483 [2024-12-10 04:14:10.757178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.483 [2024-12-10 04:14:10.757197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.767069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef92c0 00:27:11.743 [2024-12-10 
04:14:10.768848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.768867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.773689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ede8a8 00:27:11.743 [2024-12-10 04:14:10.774422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.783359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eddc00 00:27:11.743 [2024-12-10 04:14:10.784331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.784351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.792758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee7c50 00:27:11.743 [2024-12-10 04:14:10.793701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.793721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.801942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eebfd0 00:27:11.743 [2024-12-10 04:14:10.802885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.802903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.810379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee1b48 00:27:11.743 [2024-12-10 04:14:10.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.811338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.820369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee4de8 00:27:11.743 [2024-12-10 04:14:10.821439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.821458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.829577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with 
pdu=0x200016eebfd0 00:27:11.743 [2024-12-10 04:14:10.830779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.830799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.836881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eef6a8 00:27:11.743 [2024-12-10 04:14:10.837602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.837621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.845848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef57b0 00:27:11.743 [2024-12-10 04:14:10.846589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.846609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.854774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef7da8 00:27:11.743 [2024-12-10 04:14:10.855520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.855538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.863985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee7818 00:27:11.743 [2024-12-10 04:14:10.864504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.864524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.874727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:11.743 [2024-12-10 04:14:10.876139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.876158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.883056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efdeb0 00:27:11.743 [2024-12-10 04:14:10.884119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.884139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.891216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ac0410) with pdu=0x200016ef9f68 00:27:11.743 [2024-12-10 04:14:10.892469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.892488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.898962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef92c0 00:27:11.743 [2024-12-10 04:14:10.899674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.908933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eea680 00:27:11.743 [2024-12-10 04:14:10.909800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.909818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.917870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee3d08 00:27:11.743 [2024-12-10 04:14:10.918738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.918758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.927933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efb480 00:27:11.743 [2024-12-10 04:14:10.929238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.929256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.936262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef4b08 00:27:11.743 [2024-12-10 04:14:10.937197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.937216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.945083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee8d30 00:27:11.743 [2024-12-10 04:14:10.946054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.946073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.954015] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee6fa8 00:27:11.743 [2024-12-10 04:14:10.954954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.954973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.962981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef9f68 00:27:11.743 [2024-12-10 04:14:10.963945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.963963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.971962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:11.743 [2024-12-10 04:14:10.972901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.972920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.743 [2024-12-10 04:14:10.981174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eea248 00:27:11.743 [2024-12-10 04:14:10.981914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.743 [2024-12-10 04:14:10.981934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.744 [2024-12-10 04:14:10.991466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef6458 00:27:11.744 [2024-12-10 04:14:10.992985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.744 [2024-12-10 04:14:10.993003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.744 [2024-12-10 04:14:10.997747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef4f40 00:27:11.744 [2024-12-10 04:14:10.998459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.744 [2024-12-10 04:14:10.998479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.744 [2024-12-10 04:14:11.006685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eff3c8 00:27:11.744 [2024-12-10 04:14:11.007500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.744 [2024-12-10 04:14:11.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.744 [2024-12-10 04:14:11.016077] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee8d30 00:27:11.744 [2024-12-10 04:14:11.016997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.744 [2024-12-10 04:14:11.017015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.026260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef2510 00:27:12.003 [2024-12-10 04:14:11.027367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.027387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.035313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efe720 00:27:12.003 [2024-12-10 04:14:11.036388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.036408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.044459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef3a28 00:27:12.003 [2024-12-10 04:14:11.045523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.045544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.053406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efdeb0 00:27:12.003 [2024-12-10 04:14:11.054474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.054493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.062721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efc128 00:27:12.003 [2024-12-10 04:14:11.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.063905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.070081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee6fa8 00:27:12.003 [2024-12-10 04:14:11.070812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.070831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.003 
[2024-12-10 04:14:11.079033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef9f68 00:27:12.003 [2024-12-10 04:14:11.079765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.079784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.088007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:12.003 [2024-12-10 04:14:11.088722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.088741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.097182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eea680 00:27:12.003 [2024-12-10 04:14:11.097676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.097695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.106566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef9b30 00:27:12.003 [2024-12-10 04:14:11.107182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.107202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.115339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efb048 00:27:12.003 [2024-12-10 04:14:11.116222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.116241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.123872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ede8a8 00:27:12.003 [2024-12-10 04:14:11.124501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.124519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.133083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef8618 00:27:12.003 [2024-12-10 04:14:11.133976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.133995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003d p:0 
m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.141945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef0bc0 00:27:12.003 [2024-12-10 04:14:11.142445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.142464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.151275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eee5c8 00:27:12.003 [2024-12-10 04:14:11.151881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.151901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.160667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef6890 00:27:12.003 [2024-12-10 04:14:11.161400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.161419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.169137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee6738 00:27:12.003 [2024-12-10 04:14:11.170420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.170438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.176831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eebfd0 00:27:12.003 [2024-12-10 04:14:11.177535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.003 [2024-12-10 04:14:11.177554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.003 [2024-12-10 04:14:11.185928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ede470 00:27:12.004 [2024-12-10 04:14:11.186650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.186668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.196163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee6738 00:27:12.004 [2024-12-10 04:14:11.197128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.197146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.206693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016edece0 00:27:12.004 [2024-12-10 04:14:11.208253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.208272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.213213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:12.004 [2024-12-10 04:14:11.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.214075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.224204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee8d30 00:27:12.004 [2024-12-10 04:14:11.225519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.225538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.233565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016efd208 00:27:12.004 [2024-12-10 04:14:11.235011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.242871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:12.004 [2024-12-10 04:14:11.244417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.244440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.249191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee3498 00:27:12.004 [2024-12-10 04:14:11.249983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.250002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.259102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee12d8 00:27:12.004 [2024-12-10 04:14:11.260201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.260220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.267858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016edece0 00:27:12.004 [2024-12-10 04:14:11.268946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.268965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.004 [2024-12-10 04:14:11.276152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef6890 00:27:12.004 [2024-12-10 04:14:11.276788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.004 [2024-12-10 04:14:11.276808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.263 [2024-12-10 04:14:11.287395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee4de8 00:27:12.263 [2024-12-10 04:14:11.288904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.263 [2024-12-10 04:14:11.288924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.263 [2024-12-10 04:14:11.294107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee4de8 00:27:12.263 [2024-12-10 04:14:11.294881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.263 [2024-12-10 04:14:11.294900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.263 [2024-12-10 04:14:11.305045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef5378 00:27:12.263 [2024-12-10 04:14:11.306209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.263 [2024-12-10 04:14:11.306229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.263 [2024-12-10 04:14:11.312362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef46d0 00:27:12.263 [2024-12-10 04:14:11.312891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.263 [2024-12-10 04:14:11.312909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.263 [2024-12-10 04:14:11.321737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef96f8 00:27:12.263 [2024-12-10 04:14:11.322627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.263 [2024-12-10 04:14:11.322646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:12.263 [2024-12-10 04:14:11.330834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef57b0
00:27:12.263 [2024-12-10 04:14:11.331262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.263 [2024-12-10 04:14:11.331282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:12.263 [2024-12-10 04:14:11.340010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016eeee38
00:27:12.263 [2024-12-10 04:14:11.340687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.263 [2024-12-10 04:14:11.340707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[... some forty further records of this same three-line pattern follow (04:14:11.350014 through 04:14:11.701581): a CRC32C data digest error on tqpair=(0x1ac0410) at a varying pdu offset, the offending WRITE (qid:1, varying cid and lba, len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
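Every record in this run is the same three-step sequence: the TCP transport finishes computing the CRC32C data digest of a payload and finds a mismatch (data_crc32_calc_done), the driver prints the WRITE it had queued, and the command completes with status (00/22), which is status code type 0 (generic) and status code 0x22, Transient Transport Error, with dnr:0 so the command is retryable. A quick way to sanity-check a captured copy of this console output is a pair of grep one-liners; build.log is a hypothetical capture path, not a file produced by this job. The run continues for a few more records below before bdevperf prints its summary.

  # count payloads that failed digest verification, then count how many
  # distinct PDU offsets were involved (build.log is an assumed capture file)
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
  grep -o 'pdu=0x[0-9a-f]*' build.log | sort -u | wc -l
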
00:27:12.525 [2024-12-10 04:14:11.711444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef1868
00:27:12.525 [2024-12-10 04:14:11.712738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.525 [2024-12-10 04:14:11.712760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:12.525 [2024-12-10 04:14:11.720832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee27f0
00:27:12.525 [2024-12-10 04:14:11.722265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.525 [2024-12-10 04:14:11.722284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:12.525 [2024-12-10 04:14:11.727292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee1f80
00:27:12.525 [2024-12-10 04:14:11.727905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.525 [2024-12-10 04:14:11.727924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:12.525 [2024-12-10 04:14:11.736644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ee4578
00:27:12.525 [2024-12-10 04:14:11.737390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.525 [2024-12-10 04:14:11.737410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:12.525 [2024-12-10 04:14:11.747507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac0410) with pdu=0x200016ef8618
00:27:12.525 [2024-12-10 04:14:11.748765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.525 [2024-12-10 04:14:11.748784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:12.525 27913.00 IOPS, 109.04 MiB/s
00:27:12.525 Latency(us)
00:27:12.525 [2024-12-10T03:14:11.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.525 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:12.525 nvme0n1 : 2.00 27930.18 109.10 0.00 0.00 4577.23 1778.83 14293.09
00:27:12.525 [2024-12-10T03:14:11.811Z] ===================================================================================================================
00:27:12.525 [2024-12-10T03:14:11.811Z] Total : 27930.18 109.10 0.00 0.00 4577.23 1778.83 14293.09
00:27:12.525 {
00:27:12.525 "results": [
00:27:12.525 {
00:27:12.525 "job": "nvme0n1",
00:27:12.525 "core_mask": "0x2",
00:27:12.525 "workload": "randwrite",
00:27:12.525 "status": "finished",
00:27:12.525 "queue_depth": 128,
00:27:12.525 "io_size": 4096,
00:27:12.525 "runtime": 2.004534,
00:27:12.525 "iops": 27930.18227677854,
00:27:12.525 "mibps": 109.10227451866618,
00:27:12.525 "io_failed": 0,
00:27:12.525 "io_timeout": 0,
00:27:12.525 "avg_latency_us": 4577.231357415454,
00:27:12.525 "min_latency_us": 1778.8342857142857,
00:27:12.525 "max_latency_us": 14293.089523809524
00:27:12.525 }
00:27:12.525 ],
00:27:12.525 "core_count": 1
00:27:12.525 }
00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:12.525 | .driver_specific
00:27:12.525 | .nvme_error
00:27:12.525 | .status_code
00:27:12.525 | .command_transient_transport_error'
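
The jq filter above is the whole of get_transient_errcount: because this bperf instance was configured with bdev_nvme_set_options --nvme-error-stat, the NVMe bdev module keeps per-status-code error counters, and bdev_get_iostat exposes them under .driver_specific.nvme_error.status_code. A standalone re-creation of the helper, reusing the rpc.py path and bperf socket from this run (a hedged sketch, not the script's literal source):

  # returns how many commands on the given bdev completed with
  # TRANSIENT TRANSPORT ERROR since the controller was attached
  get_transient_errcount() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 ))   # digest.sh@71; the count here was 219
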
"status": "finished", 00:27:12.525 "queue_depth": 128, 00:27:12.525 "io_size": 4096, 00:27:12.525 "runtime": 2.004534, 00:27:12.525 "iops": 27930.18227677854, 00:27:12.525 "mibps": 109.10227451866618, 00:27:12.525 "io_failed": 0, 00:27:12.525 "io_timeout": 0, 00:27:12.525 "avg_latency_us": 4577.231357415454, 00:27:12.525 "min_latency_us": 1778.8342857142857, 00:27:12.525 "max_latency_us": 14293.089523809524 00:27:12.525 } 00:27:12.525 ], 00:27:12.525 "core_count": 1 00:27:12.525 } 00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:12.525 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:12.525 | .driver_specific 00:27:12.525 | .nvme_error 00:27:12.525 | .status_code 00:27:12.525 | .command_transient_transport_error' 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 204856 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 204856 ']' 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 204856 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.785 04:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 204856 00:27:12.785 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.785 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.785 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 204856' 00:27:12.785 killing process with pid 204856 00:27:12.785 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 204856 00:27:12.785 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.785 00:27:12.785 Latency(us) 00:27:12.785 [2024-12-10T03:14:12.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.785 [2024-12-10T03:14:12.071Z] =================================================================================================================== 00:27:12.785 [2024-12-10T03:14:12.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.785 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 204856 00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:13.044 04:14:12 
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=205521
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 205521 /var/tmp/bperf.sock
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 205521 ']'
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:13.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:13.044 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.045 [2024-12-10 04:14:12.218917] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:13.045 [2024-12-10 04:14:12.218964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205521 ]
00:27:13.045 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:13.045 Zero copy mechanism will not be used.
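
bperfpid is the PID of the bdevperf instance launched on the line above, and waitforlisten (note max_retries=100 in the trace) blocks until that process is both alive and answering on /var/tmp/bperf.sock, so the RPCs that follow cannot race the application's startup. A simplified sketch of that polling loop; the real helper in common/autotest_common.sh retries an actual RPC rather than just testing for the socket, so treat the readiness check here as an assumption:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/bperf.sock} max_retries=100
      while (( max_retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
          [ -S "$rpc_addr" ] && return 0            # UNIX-domain socket is up
          sleep 0.1
      done
      return 1
  }
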
00:27:13.045 [2024-12-10 04:14:12.292831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.045 [2024-12-10 04:14:12.331505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:13.304 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:13.304 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:13.304 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:13.304 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:13.562 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:13.562 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.562 04:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.805 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.805 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:13.805 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:13.805 nvme0n1
00:27:13.805 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:14.081 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.081 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.081 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.081 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:14.081 04:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:14.081 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.081 Zero copy mechanism will not be used.
00:27:14.081 Running I/O for 2 seconds...
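
The xtrace above is the core of the digest-error test: enable per-command error statistics, make sure no stale crc32c injection is armed, attach the controller with --ddgst so every data PDU carries a CRC32C data digest, arm the accel layer to corrupt every 32nd crc32c result, and drive traffic. Condensed into plain rpc.py calls below; the sketch assumes the accel injection goes to the nvmf target application on rpc.py's default socket (the script issues it through rpc_cmd), while the bdev_nvme calls go to the bperf socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable           # clear any earlier injection
  $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # data digest on
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32     # corrupt every 32nd crc32c
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $SOCK perform_tests                                   # run I/O for 2 seconds

With the digest deliberately corrupted on one side of the connection, each affected 131072-byte WRITE (32 blocks at the 4 KiB block size, len:32 in the records that follow) fails digest verification and completes as TRANSIENT TRANSPORT ERROR, which is exactly what the stream below shows.
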
00:27:14.081 [2024-12-10 04:14:13.122478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:14.081 [2024-12-10 04:14:13.122557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.081 [2024-12-10 04:14:13.122584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.081 [2024-12-10 04:14:13.127545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:14.081 [2024-12-10 04:14:13.127615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.081 [2024-12-10 04:14:13.127637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... a long run of further records with the same three-line pattern follows (04:14:13.132179 through 04:14:13.485222), all on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8: 128 KiB WRITEs (qid:1, cid 0 or 1, len:32, varying lba) failing data digest verification and completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:27:14.344 [2024-12-10 04:14:13.489854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 [2024-12-10
04:14:13.490080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.344 [2024-12-10 04:14:13.490099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.344 [2024-12-10 04:14:13.494822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.494960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.494978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.499773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.500003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.504193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.504426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.504446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.508587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.508818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.508837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.512752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.512978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.512999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.517129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.517369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.517390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.521547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with 
pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.521793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.521812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.525992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.526280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.526300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.530294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.530518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.534130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.534347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.534368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.538033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.538245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.538263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.541901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.542099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.542116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.545782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.545987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.546014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.549691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.549903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.549923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.553775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.553962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.553979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.558691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.558874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.558892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.563062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.563254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.563274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.567114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.567296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.567314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.571144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.571355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.571375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.574974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.575178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.575196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.578852] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.579053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.579071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.582761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.582957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.582977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.586633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.586828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.586848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.590497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.590682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.590700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.594689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.594886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.594905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.598570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.598761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.602556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.602745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.602764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.606339] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.345 [2024-12-10 04:14:13.606554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.345 [2024-12-10 04:14:13.606572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.345 [2024-12-10 04:14:13.610112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.346 [2024-12-10 04:14:13.610317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.346 [2024-12-10 04:14:13.610335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.346 [2024-12-10 04:14:13.614146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.346 [2024-12-10 04:14:13.614345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.346 [2024-12-10 04:14:13.614363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.346 [2024-12-10 04:14:13.618814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.346 [2024-12-10 04:14:13.618998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.346 [2024-12-10 04:14:13.619016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.346 [2024-12-10 04:14:13.623023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.346 [2024-12-10 04:14:13.623209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.346 [2024-12-10 04:14:13.623228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.627451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.627638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.627657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.632119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.632286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.632305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 
[2024-12-10 04:14:13.636762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.636962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.636980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.640805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.640989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.641007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.644616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.644804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.644822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.648398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.648589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.648607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.652180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.652358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.652376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.655934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.656121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.656139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.659735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.659926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.659944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.663492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.663680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.663700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.667240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.667426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.667444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.671110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.671326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.671346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.675648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.675837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.675855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.680323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.680500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.680519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.684353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.684548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.684568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.688355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.688546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.688568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.692297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.692485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.692504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.696443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.696611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.696630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.700955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.701178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.701198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.706177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.706373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.706391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.711876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.712046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.712064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.716206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.716445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.716464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.720488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.720693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.724681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.724873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.724892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.728811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.729009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.729027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.732909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.607 [2024-12-10 04:14:13.733141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.607 [2024-12-10 04:14:13.733162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.607 [2024-12-10 04:14:13.737135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.737373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.737394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.741156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.741351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.741369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.745185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.745364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.745384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.749041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.749233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.749251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.753008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.753196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.753214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.756974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.757206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.757224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.760934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.761120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.761138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.764875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.765073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.765092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.768993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.769179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.769198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.772981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.773213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.773234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.776974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.777217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 
04:14:13.777237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.780965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.781200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.781220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.784925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.785113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.785131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.789080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.789286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.789305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.793882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.794098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.797807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.798018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.798041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.802426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.802659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.802678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.808000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.808212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:14.608 [2024-12-10 04:14:13.808230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.812971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.813197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.813217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.818019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.818238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.818256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.823782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.824059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.824079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.830536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.830752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.830772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.835418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.835614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.835632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.839878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.840089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.840108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.844073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.844299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.848187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.848363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.848382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.851944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.852144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.852162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.855726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.608 [2024-12-10 04:14:13.855913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.608 [2024-12-10 04:14:13.855931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.608 [2024-12-10 04:14:13.859527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.609 [2024-12-10 04:14:13.859698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.609 [2024-12-10 04:14:13.859718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.609 [2024-12-10 04:14:13.863229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.609 [2024-12-10 04:14:13.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.609 [2024-12-10 04:14:13.863431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.609 [2024-12-10 04:14:13.866949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.609 [2024-12-10 04:14:13.867133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.609 [2024-12-10 04:14:13.867150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.609 [2024-12-10 04:14:13.870690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:14.609 [2024-12-10 04:14:13.870875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.609 [2024-12-10 04:14:13.870893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.609 [2024-12-10 04:14:13.874479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:14.609 [2024-12-10 04:14:13.874657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.609 [2024-12-10 04:14:13.874676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.609 [2024-12-10 04:14:13.878274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:14.609 [2024-12-10 04:14:13.878477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.609 [2024-12-10 04:14:13.878496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... identical cycles elided, timestamps 04:14:13.882 through 04:14:14.120: data_crc32_calc_done reports a data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8, the affected WRITE (len:32, lba varying) is reprinted, and each completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); sqhd cycles 0002/0022/0042/0062 ...]
00:27:14.871 7119.00 IOPS, 889.88 MiB/s [2024-12-10T03:14:14.157Z]
00:27:14.871 [2024-12-10 04:14:14.125269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:14.871 [2024-12-10 04:14:14.125407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.872 [2024-12-10 04:14:14.125425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same digest-error/WRITE/TRANSIENT TRANSPORT ERROR cycle continues, timestamps 04:14:14.129 through 04:14:14.410 ...]
00:27:15.395 [2024-12-10 04:14:14.413925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:15.395 [2024-12-10 04:14:14.414096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.395 [2024-12-10 04:14:14.414115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:15.395 [2024-12-10 04:14:14.417808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0)
with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.417967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.417987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.421902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.422047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.422069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.427037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.427183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.427202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.431254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.431401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.431419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.435228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.435385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.435405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.439211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.439379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.439399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.443065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.443238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.443256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.447453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.447594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.447614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.453086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.453280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.453299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.459395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.459574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.459593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.465616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.465747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.465766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.472041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.472208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.472227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.476648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.476826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.476845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.480968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.481114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.481133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.485270] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.485404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.485423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.489529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.395 [2024-12-10 04:14:14.489706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.395 [2024-12-10 04:14:14.489726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.395 [2024-12-10 04:14:14.493695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.396 [2024-12-10 04:14:14.493894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.396 [2024-12-10 04:14:14.493912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.396 [2024-12-10 04:14:14.497853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.396 [2024-12-10 04:14:14.498011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.396 [2024-12-10 04:14:14.498028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.396 [2024-12-10 04:14:14.501810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.396 [2024-12-10 04:14:14.501972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.396 [2024-12-10 04:14:14.501991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.396 [2024-12-10 04:14:14.505843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.396 [2024-12-10 04:14:14.506057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.396 [2024-12-10 04:14:14.506077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.396 [2024-12-10 04:14:14.511358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.396 [2024-12-10 04:14:14.511614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.396 [2024-12-10 04:14:14.511634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.396 
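The run of tcp.c:2241:data_crc32_calc_done errors above is SPDK's TCP transport rejecting the CRC32C data digest (DDGST) on each inbound data PDU for tqpair=(0x1ac08f0); the volume and regularity of the failures, always against the same pdu pointer, are consistent with a digest error-injection pass rather than real corruption. As a rough illustration only (a minimal standalone sketch, not SPDK's implementation; the payload buffer and received digest below are made up), a receiver validates the digest like this:

    /* Minimal sketch of NVMe/TCP data-digest validation: compute CRC32C
     * (Castagnoli) over the PDU DATA and compare it with the DDGST value
     * carried in the received PDU.  Not SPDK's code; values are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bit-at-a-time CRC32C, reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t pdu_data[16 * 1024] = { 0 };    /* stand-in for a 32-block WRITE payload */
        uint32_t ddgst_received = 0xDEADBEEFu;  /* hypothetical digest from the PDU */

        if (crc32c(pdu_data, sizeof(pdu_data)) != ddgst_received) {
            /* This is the point at which the transport logs "Data digest
             * error" and fails the command back to the NVMe layer. */
            fprintf(stderr, "Data digest error\n");
            return 1;
        }
        return 0;
    }

A digest mismatch means the payload cannot be trusted, so the transport fails the WRITE, and the host-side completion printer reports it with the generic status seen on every completion line here: TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 00h, status code 22h, a retryable error (dnr:0).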
00:27:15.396 [2024-12-10 04:14:14.515297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:15.396 [2024-12-10 04:14:14.515491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.396 [2024-12-10 04:14:14.515509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the triplet repeats from 04:14:14.519335 through 04:14:14.695018, same tqpair and pdu, with cid switching between 0 and 1 ...]
00:27:15.658 [2024-12-10 04:14:14.701414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:15.658 [2024-12-10 04:14:14.701531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.658 [2024-12-10 04:14:14.701551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
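Every completion line prints the same decoded status fields: (00/22) is the (status code type / status code) pair in hex, p is the phase tag, m the "more" bit, and dnr the do-not-retry bit. A small hedged sketch of how those fields unpack from completion-queue-entry dword 3 (field layout per the NVMe base specification; the raw dword value here is fabricated to match the log):

    /* Decode NVMe CQE dword 3 into the fields that spdk_nvme_print_completion
     * reports, e.g. "(00/22) ... p:0 m:0 dnr:0".  dw3 below is made up. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t dw3 = (uint32_t)0x22 << 17;  /* SC=0x22, SCT=0, CID=0, flags clear */

        unsigned cid = dw3 & 0xFFFFu;         /* command identifier */
        unsigned p   = (dw3 >> 16) & 0x1;     /* phase tag */
        unsigned sc  = (dw3 >> 17) & 0xFF;    /* status code: 0x22 = transient transport error */
        unsigned sct = (dw3 >> 25) & 0x7;     /* status code type: 0 = generic command status */
        unsigned m   = (dw3 >> 30) & 0x1;     /* more status information available */
        unsigned dnr = (dw3 >> 31) & 0x1;     /* do not retry */

        printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
        return 0;
    }

With dnr:0 the host is allowed to retry, which is consistent with what the log shows: the test keeps issuing WRITEs and the triplets keep accumulating instead of the run aborting.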
00:27:15.658 [2024-12-10 04:14:14.706376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:15.658 [2024-12-10 04:14:14.706603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.658 [2024-12-10 04:14:14.706624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the triplet repeats from 04:14:14.710966 through 04:14:14.977785, same tqpair and pdu, with cid switching between 0 and 1 ...]
00:27:15.920 [2024-12-10 04:14:14.981629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8
00:27:15.920 [2024-12-10 04:14:14.981754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.920 [2024-12-10 04:14:14.981772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:15.920 [2024-12-10 04:14:14.985674] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.920 [2024-12-10 04:14:14.985784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.920 [2024-12-10 04:14:14.985803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.920 [2024-12-10 04:14:14.990279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.920 [2024-12-10 04:14:14.990377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.920 [2024-12-10 04:14:14.990396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.920 [2024-12-10 04:14:14.994685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.920 [2024-12-10 04:14:14.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.920 [2024-12-10 04:14:14.994811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.920 [2024-12-10 04:14:14.998941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:14.999069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:14.999087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.003262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.003359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.003378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.007123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.007247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.007265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.010895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.011025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.011046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 
[2024-12-10 04:14:15.014708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.014825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.014843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.018661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.018781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.018799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.022590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.022690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.022708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.026506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.026651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.030391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.030519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.034183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.034283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.034301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.038155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.038249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.038267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.043025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.043125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.043143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.047101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.047242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.051286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.051405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.051423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.055127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.055293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.055310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.058915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.059024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.059041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.062779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.062894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.062912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.066856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.066985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.067003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.071560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.071684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.071702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.075586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.075692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.075711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.079549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.079652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.079670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.083386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.083516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.083535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.087150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.087291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.087309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.091028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.091157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.091181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.095555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.095653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.095671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.100210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.100344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.100362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.104140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.104258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.104276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.108295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.921 [2024-12-10 04:14:15.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.921 [2024-12-10 04:14:15.108441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.921 [2024-12-10 04:14:15.112063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.922 [2024-12-10 04:14:15.112178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.922 [2024-12-10 04:14:15.112195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.922 [2024-12-10 04:14:15.115951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.922 [2024-12-10 04:14:15.116088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.922 [2024-12-10 04:14:15.116109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.922 [2024-12-10 04:14:15.120259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.922 [2024-12-10 04:14:15.120351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.922 [2024-12-10 04:14:15.120369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.922 [2024-12-10 04:14:15.124960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ac08f0) with pdu=0x200016eff3c8 00:27:15.922 [2024-12-10 04:14:15.125017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.922 [2024-12-10 04:14:15.125035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.922 7193.00 IOPS, 899.12 MiB/s 00:27:15.922 Latency(us) 00:27:15.922 [2024-12-10T03:14:15.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.922 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:15.922 nvme0n1 : 2.00 7189.85 898.73 0.00 0.00 2221.32 1599.39 7052.92 00:27:15.922 [2024-12-10T03:14:15.208Z] =================================================================================================================== 00:27:15.922 [2024-12-10T03:14:15.208Z] Total : 7189.85 898.73 0.00 0.00 2221.32 1599.39 7052.92 00:27:15.922 { 00:27:15.922 "results": [ 00:27:15.922 { 00:27:15.922 "job": "nvme0n1", 00:27:15.922 "core_mask": "0x2", 00:27:15.922 "workload": "randwrite", 00:27:15.922 "status": "finished", 00:27:15.922 "queue_depth": 16, 00:27:15.922 "io_size": 131072, 00:27:15.922 "runtime": 2.003798, 00:27:15.922 "iops": 7189.846481531572, 00:27:15.922 "mibps": 898.7308101914465, 00:27:15.922 "io_failed": 0, 00:27:15.922 "io_timeout": 0, 00:27:15.922 "avg_latency_us": 2221.3174162031023, 00:27:15.922 "min_latency_us": 1599.3904761904762, 00:27:15.922 "max_latency_us": 7052.921904761904 00:27:15.922 } 00:27:15.922 ], 00:27:15.922 "core_count": 1 00:27:15.922 } 00:27:15.922 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.922 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.922 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.922 | .driver_specific 00:27:15.922 | .nvme_error 00:27:15.922 | .status_code 00:27:15.922 | .command_transient_transport_error' 00:27:15.922 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 465 > 0 )) 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 205521 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 205521 ']' 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 205521 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205521 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205521' 00:27:16.181 killing process with pid 205521 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 205521 
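The transient-error check traced above works over the bperf RPC socket: bdev_get_iostat reports per-bdev NVMe error counters, and the jq filter extracts how many completions carried COMMAND TRANSIENT TRANSPORT ERROR status. A minimal standalone sketch of the same check, assuming the /var/tmp/bperf.sock socket and nvme0n1 bdev shown in the trace (the wrapper function below is illustrative, not the test's own helper):

# Pull the transient transport error count for a bdev from bperf's iostat.
get_transient_errcount() {
    local bdev=$1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# Every injected data-digest failure should complete with a transient
# transport error (00/22), so the test only asserts a non-zero count
# (465 in this run).
(( errcount > 0 ))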
00:27:16.181 Received shutdown signal, test time was about 2.000000 seconds 00:27:16.181 00:27:16.181 Latency(us) 00:27:16.181 [2024-12-10T03:14:15.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.181 [2024-12-10T03:14:15.467Z] =================================================================================================================== 00:27:16.181 [2024-12-10T03:14:15.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:16.181 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 205521 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 203765 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 203765 ']' 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 203765 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203765 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203765' 00:27:16.440 killing process with pid 203765 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 203765 00:27:16.440 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 203765 00:27:16.699 00:27:16.699 real 0m13.952s 00:27:16.699 user 0m26.513s 00:27:16.699 sys 0m4.771s 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.699 ************************************ 00:27:16.699 END TEST nvmf_digest_error 00:27:16.699 ************************************ 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.699 rmmod nvme_tcp 00:27:16.699 rmmod nvme_fabrics 00:27:16.699 rmmod nvme_keyring 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 
-- # set -e 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 203765 ']' 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 203765 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 203765 ']' 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 203765 00:27:16.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (203765) - No such process 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 203765 is not found' 00:27:16.699 Process with pid 203765 is not found 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.699 04:14:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.236 04:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:19.236 00:27:19.236 real 0m36.072s 00:27:19.236 user 0m54.542s 00:27:19.236 sys 0m13.944s 00:27:19.236 04:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.236 04:14:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.236 ************************************ 00:27:19.236 END TEST nvmf_digest 00:27:19.236 ************************************ 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.236 ************************************ 00:27:19.236 START TEST nvmf_bdevperf 00:27:19.236 ************************************ 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:19.236 * Looking for test storage... 00:27:19.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.236 --rc genhtml_branch_coverage=1 00:27:19.236 --rc genhtml_function_coverage=1 00:27:19.236 --rc genhtml_legend=1 00:27:19.236 --rc geninfo_all_blocks=1 00:27:19.236 --rc geninfo_unexecuted_blocks=1 00:27:19.236 00:27:19.236 ' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.236 --rc genhtml_branch_coverage=1 00:27:19.236 --rc genhtml_function_coverage=1 00:27:19.236 --rc genhtml_legend=1 00:27:19.236 --rc geninfo_all_blocks=1 00:27:19.236 --rc geninfo_unexecuted_blocks=1 00:27:19.236 00:27:19.236 ' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.236 --rc genhtml_branch_coverage=1 00:27:19.236 --rc genhtml_function_coverage=1 00:27:19.236 --rc genhtml_legend=1 00:27:19.236 --rc geninfo_all_blocks=1 00:27:19.236 --rc geninfo_unexecuted_blocks=1 00:27:19.236 00:27:19.236 ' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:19.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.236 --rc genhtml_branch_coverage=1 00:27:19.236 --rc genhtml_function_coverage=1 00:27:19.236 --rc genhtml_legend=1 00:27:19.236 --rc geninfo_all_blocks=1 00:27:19.236 --rc geninfo_unexecuted_blocks=1 00:27:19.236 00:27:19.236 ' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.236 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.237 04:14:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.807 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.807 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.808 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.808 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.808 Found net devices under 0000:af:00.0: cvl_0_0 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.808 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.808 04:14:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:27:25.808 00:27:25.808 --- 10.0.0.2 ping statistics --- 00:27:25.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.808 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:25.808 00:27:25.808 --- 10.0.0.1 ping statistics --- 00:27:25.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.808 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:25.808 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=209467 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 209467 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 209467 ']' 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 [2024-12-10 04:14:24.186748] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:27:25.809 [2024-12-10 04:14:24.186792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.809 [2024-12-10 04:14:24.262505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.809 [2024-12-10 04:14:24.303608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.809 [2024-12-10 04:14:24.303642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.809 [2024-12-10 04:14:24.303649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.809 [2024-12-10 04:14:24.303655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.809 [2024-12-10 04:14:24.303660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.809 [2024-12-10 04:14:24.304980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.809 [2024-12-10 04:14:24.305087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.809 [2024-12-10 04:14:24.305089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 [2024-12-10 04:14:24.440810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 Malloc0 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
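With the target running inside the cvl_0_0_ns_spdk namespace (one E810 port moved into it with 10.0.0.2, the other left in the root namespace with 10.0.0.1), the harness provisions it over the /var/tmp/spdk.sock RPC socket: a TCP transport with the '-t tcp -o -u 8192' options assembled in NVMF_TRANSPORT_OPTS earlier, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host NQN, -s sets the serial). The namespace and the 10.0.0.2:4420 listener are added by the rpc_cmd calls traced just below. Issued by hand with SPDK's rpc.py, the same sequence looks like this (a sketch of the wrapped calls, assuming an SPDK checkout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420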
00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 [2024-12-10 04:14:24.495906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.809 { 00:27:25.809 "params": { 00:27:25.809 "name": "Nvme$subsystem", 00:27:25.809 "trtype": "$TEST_TRANSPORT", 00:27:25.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.809 "adrfam": "ipv4", 00:27:25.809 "trsvcid": "$NVMF_PORT", 00:27:25.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.809 "hdgst": ${hdgst:-false}, 00:27:25.809 "ddgst": ${ddgst:-false} 00:27:25.809 }, 00:27:25.809 "method": "bdev_nvme_attach_controller" 00:27:25.809 } 00:27:25.809 EOF 00:27:25.809 )") 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:25.809 04:14:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:25.809 "params": { 00:27:25.809 "name": "Nvme1", 00:27:25.809 "trtype": "tcp", 00:27:25.809 "traddr": "10.0.0.2", 00:27:25.809 "adrfam": "ipv4", 00:27:25.809 "trsvcid": "4420", 00:27:25.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.809 "hdgst": false, 00:27:25.809 "ddgst": false 00:27:25.809 }, 00:27:25.809 "method": "bdev_nvme_attach_controller" 00:27:25.809 }' 00:27:25.809 [2024-12-10 04:14:24.547192] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:27:25.809 [2024-12-10 04:14:24.547245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209522 ] 00:27:25.809 [2024-12-10 04:14:24.625234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.809 [2024-12-10 04:14:24.664961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.809 Running I/O for 1 seconds... 00:27:26.746 11361.00 IOPS, 44.38 MiB/s 00:27:26.746 Latency(us) 00:27:26.746 [2024-12-10T03:14:26.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.746 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.746 Verification LBA range: start 0x0 length 0x4000 00:27:26.746 Nvme1n1 : 1.01 11379.37 44.45 0.00 0.00 11207.73 2371.78 12732.71 00:27:26.746 [2024-12-10T03:14:26.032Z] =================================================================================================================== 00:27:26.746 [2024-12-10T03:14:26.032Z] Total : 11379.37 44.45 0.00 0.00 11207.73 2371.78 12732.71 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=209886 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.004 { 00:27:27.004 "params": { 00:27:27.004 "name": "Nvme$subsystem", 00:27:27.004 "trtype": "$TEST_TRANSPORT", 00:27:27.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.004 "adrfam": "ipv4", 00:27:27.004 "trsvcid": "$NVMF_PORT", 00:27:27.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.004 "hdgst": ${hdgst:-false}, 00:27:27.004 "ddgst": ${ddgst:-false} 00:27:27.004 }, 00:27:27.004 "method": "bdev_nvme_attach_controller" 00:27:27.004 } 00:27:27.004 EOF 00:27:27.004 )") 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
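In the traced bdevperf invocations, --json /dev/fd/62 (first run, above) and /dev/fd/63 (second run, below) are bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed in the trace, and bdevperf reads it as its JSON config file. Written out directly (a sketch assuming an SPDK checkout with the harness's gen_nvmf_target_json helper in scope):

  # Second run: -q 128 in-flight I/Os of -o 4096 bytes each, -w verify workload,
  # -t 15 seconds; the trailing -f comes straight from the traced command line.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f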
00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:27.004 04:14:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:27.004 "params": { 00:27:27.004 "name": "Nvme1", 00:27:27.004 "trtype": "tcp", 00:27:27.004 "traddr": "10.0.0.2", 00:27:27.004 "adrfam": "ipv4", 00:27:27.004 "trsvcid": "4420", 00:27:27.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.004 "hdgst": false, 00:27:27.004 "ddgst": false 00:27:27.004 }, 00:27:27.004 "method": "bdev_nvme_attach_controller" 00:27:27.004 }' 00:27:27.004 [2024-12-10 04:14:26.199228] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:27:27.004 [2024-12-10 04:14:26.199278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209886 ] 00:27:27.004 [2024-12-10 04:14:26.272465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.263 [2024-12-10 04:14:26.313179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.263 Running I/O for 15 seconds... 00:27:29.577 11232.00 IOPS, 43.88 MiB/s [2024-12-10T03:14:29.434Z] 11357.00 IOPS, 44.36 MiB/s [2024-12-10T03:14:29.434Z] 04:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 209467 00:27:30.148 04:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:30.148 [2024-12-10 04:14:29.169792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 
04:14:29.169926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.148 [2024-12-10 04:14:29.169958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.148 [2024-12-10 04:14:29.169969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.169976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.169987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.169994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170282] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.149 [2024-12-10 04:14:29.170371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.149 [2024-12-10 04:14:29.170596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.149 [2024-12-10 04:14:29.170604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:30.150 [2024-12-10 04:14:29.170737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 
04:14:29.170885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.150 [2024-12-10 04:14:29.170978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.170986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.170994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.150 [2024-12-10 04:14:29.171291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.150 [2024-12-10 04:14:29.171298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.151 [2024-12-10 04:14:29.171444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:30.151 [2024-12-10 04:14:29.171744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.151 [2024-12-10 04:14:29.171824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.151 [2024-12-10 04:14:29.171830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.171838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.152 [2024-12-10 04:14:29.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.171851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.152 [2024-12-10 04:14:29.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.171870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.152 [2024-12-10 04:14:29.171876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.171884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.152 [2024-12-10 
04:14:29.171890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.171897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f510 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.171906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:30.152 [2024-12-10 04:14:29.171913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:30.152 [2024-12-10 04:14:29.171920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110544 len:8 PRP1 0x0 PRP2 0x0 00:27:30.152 [2024-12-10 04:14:29.171928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.152 [2024-12-10 04:14:29.174774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.174827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.175354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.175372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.175379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.175554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.175728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.175737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.175746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.175755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
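[editorial note] The abort dump above shows every queued READ completing with "ABORTED - SQ DELETION (00/08)". The pair in parentheses is (status code type / status code): SCT 0x0 is the NVMe generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion" -- expected when the submission queue is torn down during a controller reset. A minimal, illustrative decoder (not SPDK code) for the fields spdk_nvme_print_completion logs as "(SCT/SC)" plus p, m and dnr, following the NVMe completion-entry status layout (CQE DW3: bit 16 phase tag, bits 31:17 status):

    /* Illustrative only -- not SPDK code. Bitfield order assumes a
     * little-endian, LSB-first layout, which is fine for a sketch. */
    #include <stdio.h>

    struct cqe_status {
        unsigned p   : 1;  /* phase tag                              */
        unsigned sc  : 8;  /* status code, e.g. 0x08                 */
        unsigned sct : 3;  /* status code type, 0x0 = generic        */
        unsigned crd : 2;  /* command retry delay                    */
        unsigned m   : 1;  /* more info available in a log page      */
        unsigned dnr : 1;  /* do not retry                           */
    };

    int main(void)
    {
        /* "ABORTED - SQ DELETION (00/08)" with p:0 m:0 dnr:0 */
        struct cqe_status st = { .sct = 0x0, .sc = 0x08 };
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
    }

Note that dnr:0 on every entry means the target did not mark these aborts "do not retry", which is consistent with the driver immediately attempting controller resets below.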
00:27:30.152 [2024-12-10 04:14:29.187863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.188284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.188303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.188311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.188471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.188632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.188642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.188648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.188655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.152 [2024-12-10 04:14:29.200603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.200956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.200973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.200981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.201141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.201331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.201342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.201349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.201356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
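[editorial note] Each reset attempt above dies in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections at the target address during this window. A standalone sketch of just that failing step, using only POSIX sockets and the address/port taken from the log (10.0.0.2, 4420 -- the standard NVMe/TCP port); run with no listener on that port and it prints the same errno:

    /* Minimal reproduction sketch of the failing connect() step. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        close(fd);
        return 0;
    }

The subsequent "Failed to flush tqpair ... (9): Bad file descriptor" lines follow from the same failure: the qpair's socket never came up, so the flush path sees EBADF (errno 9).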
00:27:30.152 [2024-12-10 04:14:29.213492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.213922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.213940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.213959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.214120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.214308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.214318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.214325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.214332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.152 [2024-12-10 04:14:29.226274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.226665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.226682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.226689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.226848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.227009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.227018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.227024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.227030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.152 [2024-12-10 04:14:29.239140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.239544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.239564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.239571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.239731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.239891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.239901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.239908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.239914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.152 [2024-12-10 04:14:29.251909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.252342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.252388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.252411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.252780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.252942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.252951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.252958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.252964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.152 [2024-12-10 04:14:29.264715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.265127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.265144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.265151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.152 [2024-12-10 04:14:29.265319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.152 [2024-12-10 04:14:29.265481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.152 [2024-12-10 04:14:29.265490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.152 [2024-12-10 04:14:29.265497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.152 [2024-12-10 04:14:29.265503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.152 [2024-12-10 04:14:29.277437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.152 [2024-12-10 04:14:29.277857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.152 [2024-12-10 04:14:29.277874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.152 [2024-12-10 04:14:29.277882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.278041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.278227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.278238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.278245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.278251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.153 [2024-12-10 04:14:29.290287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.290701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.290719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.290726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.290885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.291045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.291054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.291060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.291066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.153 [2024-12-10 04:14:29.303074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.303508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.303525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.303533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.303692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.303852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.303861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.303867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.303873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.153 [2024-12-10 04:14:29.315862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.316310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.316356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.316379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.316878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.317039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.317048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.317058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.317065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.153 [2024-12-10 04:14:29.328693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.329109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.329126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.329134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.329319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.329488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.329498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.329504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.329511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.153 [2024-12-10 04:14:29.341460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.341866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.341883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.341891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.342049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.342215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.342225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.342232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.342238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.153 [2024-12-10 04:14:29.354278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.354698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.354744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.354768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.355311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.355481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.355489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.355496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.355502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.153 [2024-12-10 04:14:29.367072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.367451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.367468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.367476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.367636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.367796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.367806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.367814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.153 [2024-12-10 04:14:29.367821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.153 [2024-12-10 04:14:29.379926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.153 [2024-12-10 04:14:29.380334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.153 [2024-12-10 04:14:29.380380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.153 [2024-12-10 04:14:29.380404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.153 [2024-12-10 04:14:29.380987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.153 [2024-12-10 04:14:29.381197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.153 [2024-12-10 04:14:29.381207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.153 [2024-12-10 04:14:29.381213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.154 [2024-12-10 04:14:29.381220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.154 [2024-12-10 04:14:29.392739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.154 [2024-12-10 04:14:29.393074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.154 [2024-12-10 04:14:29.393091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.154 [2024-12-10 04:14:29.393099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.154 [2024-12-10 04:14:29.393264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.154 [2024-12-10 04:14:29.393425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.154 [2024-12-10 04:14:29.393434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.154 [2024-12-10 04:14:29.393440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.154 [2024-12-10 04:14:29.393447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.154 [2024-12-10 04:14:29.405638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.154 [2024-12-10 04:14:29.406050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.154 [2024-12-10 04:14:29.406096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.154 [2024-12-10 04:14:29.406129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.154 [2024-12-10 04:14:29.406724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.154 [2024-12-10 04:14:29.407328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.154 [2024-12-10 04:14:29.407355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.154 [2024-12-10 04:14:29.407375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.154 [2024-12-10 04:14:29.407404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.154 [2024-12-10 04:14:29.418489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.154 [2024-12-10 04:14:29.418908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.154 [2024-12-10 04:14:29.418954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.154 [2024-12-10 04:14:29.418978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.154 [2024-12-10 04:14:29.419447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.154 [2024-12-10 04:14:29.419617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.154 [2024-12-10 04:14:29.419627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.154 [2024-12-10 04:14:29.419634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.154 [2024-12-10 04:14:29.419640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.414 [2024-12-10 04:14:29.431467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.414 [2024-12-10 04:14:29.431869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-12-10 04:14:29.431886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.414 [2024-12-10 04:14:29.431895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.414 [2024-12-10 04:14:29.432063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.414 [2024-12-10 04:14:29.432240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.414 [2024-12-10 04:14:29.432251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.414 [2024-12-10 04:14:29.432258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.414 [2024-12-10 04:14:29.432265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.414 [2024-12-10 04:14:29.444445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.414 [2024-12-10 04:14:29.444870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-12-10 04:14:29.444888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.414 [2024-12-10 04:14:29.444896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.414 [2024-12-10 04:14:29.445070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.414 [2024-12-10 04:14:29.445254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.414 [2024-12-10 04:14:29.445264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.414 [2024-12-10 04:14:29.445271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.414 [2024-12-10 04:14:29.445278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.414 [2024-12-10 04:14:29.457507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.414 [2024-12-10 04:14:29.457936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.414 [2024-12-10 04:14:29.457954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.414 [2024-12-10 04:14:29.457962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.414 [2024-12-10 04:14:29.458136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.414 [2024-12-10 04:14:29.458315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.414 [2024-12-10 04:14:29.458326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.414 [2024-12-10 04:14:29.458332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.414 [2024-12-10 04:14:29.458339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.415 [2024-12-10 04:14:29.470541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.470969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.470986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.470994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.471162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.471337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.471347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.471354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.471360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.415 [2024-12-10 04:14:29.483297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.483724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.483773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.483798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.484395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.484586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.484595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.484605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.484612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.415 [2024-12-10 04:14:29.496072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.496496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.496514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.496522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.496690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.496869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.496878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.496884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.496890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.415 [2024-12-10 04:14:29.508792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.509212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.509258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.509282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.509726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.509887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.509895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.509901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.509907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.415 [2024-12-10 04:14:29.521543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.521889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.521907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.521914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.522073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.522257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.522266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.522274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.522281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.415 10026.00 IOPS, 39.16 MiB/s [2024-12-10T03:14:29.701Z] [2024-12-10 04:14:29.534410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.534828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.534873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.534897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.535420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.535582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.535592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.535598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.535604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
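[editorial note] The embedded throughput tick "10026.00 IOPS, 39.16 MiB/s" is self-consistent with the I/O size visible in the abort dump earlier: the READs carry len:8, i.e. 8 logical blocks per command, and assuming 512-byte logical blocks (which the numbers below bear out) each I/O is 4 KiB:

    10026 IO/s x 8 blocks x 512 B = 41,066,496 B/s
    41,066,496 B/s / 1,048,576 B/MiB ~= 39.16 MiB/s

So the two figures on that line are one measurement expressed two ways, not independent results.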
00:27:30.415 [2024-12-10 04:14:29.547255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.547622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.547654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.547678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.548276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.548749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.548758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.548764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.548770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.415 [2024-12-10 04:14:29.560047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.560476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.560493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.560501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.560659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.560819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.560828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.560834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.560840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.415 [2024-12-10 04:14:29.572832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.573242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.573292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.573324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.573868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.574029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.574039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.574045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.574051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.415 [2024-12-10 04:14:29.585658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.586078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.415 [2024-12-10 04:14:29.586123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.415 [2024-12-10 04:14:29.586147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.415 [2024-12-10 04:14:29.586594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.415 [2024-12-10 04:14:29.586766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.415 [2024-12-10 04:14:29.586776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.415 [2024-12-10 04:14:29.586782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.415 [2024-12-10 04:14:29.586789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.415 [2024-12-10 04:14:29.598563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.415 [2024-12-10 04:14:29.598974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.598991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.598998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.599157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.599347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.599357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.599364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.599370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.416 [2024-12-10 04:14:29.611333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.416 [2024-12-10 04:14:29.611722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.611739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.611746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.611905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.612069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.612078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.612085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.612093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.416 [2024-12-10 04:14:29.624171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.416 [2024-12-10 04:14:29.624592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.624637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.624661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.625091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.625277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.625287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.625294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.625300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.416 [2024-12-10 04:14:29.637013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.416 [2024-12-10 04:14:29.637433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.637478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.637502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.637957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.638118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.638127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.638133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.638139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.416 [2024-12-10 04:14:29.649825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.416 [2024-12-10 04:14:29.650214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.650232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.650240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.650400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.650560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.650569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.650582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.650589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.416 [2024-12-10 04:14:29.662662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.416 [2024-12-10 04:14:29.663080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.416 [2024-12-10 04:14:29.663097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:30.416 [2024-12-10 04:14:29.663105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:30.416 [2024-12-10 04:14:29.663711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:30.416 [2024-12-10 04:14:29.664274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.416 [2024-12-10 04:14:29.664293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.416 [2024-12-10 04:14:29.664307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.416 [2024-12-10 04:14:29.664321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.416 [2024-12-10 04:14:29.677764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.416 [2024-12-10 04:14:29.678287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.416 [2024-12-10 04:14:29.678310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.416 [2024-12-10 04:14:29.678321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.416 [2024-12-10 04:14:29.678575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.416 [2024-12-10 04:14:29.678831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.416 [2024-12-10 04:14:29.678844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.416 [2024-12-10 04:14:29.678854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.416 [2024-12-10 04:14:29.678863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.416 [2024-12-10 04:14:29.690848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.416 [2024-12-10 04:14:29.691207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.416 [2024-12-10 04:14:29.691226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.416 [2024-12-10 04:14:29.691234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.416 [2024-12-10 04:14:29.691408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.416 [2024-12-10 04:14:29.691582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.416 [2024-12-10 04:14:29.691592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.416 [2024-12-10 04:14:29.691599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.416 [2024-12-10 04:14:29.691606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.676 [2024-12-10 04:14:29.703872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.704278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.704296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.704303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.704479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.704640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.704650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.704656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.704662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.716770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.717175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.717219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.717244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.717825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.717986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.717996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.718002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.718008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.729553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.729958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.729975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.729982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.730141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.730306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.730316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.730322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.730328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.742342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.742692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.742736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.742766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.743255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.743416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.743425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.743431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.743438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.755258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.755704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.755712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.755871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.756032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.756041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.756048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.756054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.768150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.768580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.768600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.768609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.768770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.768931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.768940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.768947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.768953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.780966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.781379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.781397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.781405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.781565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.781728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.781738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.781744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.781751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.793838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.794183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.794201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.794209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.794369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.794528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.794537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.794544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.794550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.806714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.807133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.807151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.807158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.807347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.807517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.807526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.807534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.807541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.677 [2024-12-10 04:14:29.819647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.677 [2024-12-10 04:14:29.820067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.677 [2024-12-10 04:14:29.820085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.677 [2024-12-10 04:14:29.820093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.677 [2024-12-10 04:14:29.820268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.677 [2024-12-10 04:14:29.820446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.677 [2024-12-10 04:14:29.820456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.677 [2024-12-10 04:14:29.820466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.677 [2024-12-10 04:14:29.820473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.832489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.832885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.832903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.832910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.833070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.833234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.833244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.833250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.833257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.845354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.845749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.845766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.845774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.845933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.846092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.846102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.846108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.846114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.858241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.858602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.858647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.858670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.859270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.859662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.859672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.859679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.859685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.871150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.871566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.871582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.871589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.871748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.871908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.871917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.871924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.871929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.884029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.884356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.884373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.884380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.884539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.884699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.884708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.884715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.884721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.896966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.897395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.897414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.897421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.897596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.897756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.897766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.897772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.897778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.909748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.910177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.910195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.910206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.910375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.910549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.910558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.910564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.910570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.922516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.922927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.922945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.922953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.923121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.923298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.923309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.923315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.923322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.935298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.935627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.935645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.935653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.935822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.935992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.936002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.936008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.936015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.678 [2024-12-10 04:14:29.948106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.678 [2024-12-10 04:14:29.948565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.678 [2024-12-10 04:14:29.948583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.678 [2024-12-10 04:14:29.948591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.678 [2024-12-10 04:14:29.948760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.678 [2024-12-10 04:14:29.948932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.678 [2024-12-10 04:14:29.948942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.678 [2024-12-10 04:14:29.948948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.678 [2024-12-10 04:14:29.948955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:29.961180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:29.961606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:29.961653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:29.961677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:29.962273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:29.962823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:29.962833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:29.962839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:29.962846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:29.974045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:29.974363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:29.974380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:29.974388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:29.974548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:29.974708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:29.974717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:29.974724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:29.974731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:29.986837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:29.987252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:29.987271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:29.987278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:29.987438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:29.987598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:29.987607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:29.987614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:29.987623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:29.999767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:30.000189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:30.000206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:30.000214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:30.000382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:30.000551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:30.000560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:30.000567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:30.000574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:30.013726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:30.014171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:30.014190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:30.014198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:30.014373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:30.014548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:30.014557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:30.014564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:30.014571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:30.026789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:30.027223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:30.027242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:30.027251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:30.027425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:30.027599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:30.027609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:30.027615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:30.027622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:30.040575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:30.041060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:30.041090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:30.041102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:30.041311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:30.041510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:30.041526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:30.041537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:30.041548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.939 [2024-12-10 04:14:30.053653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.939 [2024-12-10 04:14:30.054059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.939 [2024-12-10 04:14:30.054079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.939 [2024-12-10 04:14:30.054087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.939 [2024-12-10 04:14:30.054269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.939 [2024-12-10 04:14:30.054445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.939 [2024-12-10 04:14:30.054454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.939 [2024-12-10 04:14:30.054461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.939 [2024-12-10 04:14:30.054468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.066738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.067175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.067194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.067203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.067377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.067552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.067562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.067569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.067576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.081054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.081540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.081560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.081569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.081752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.081940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.081950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.081957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.081963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.094187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.094570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.094589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.094596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.094777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.094954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.094964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.094970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.094977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.107210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.107638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.107656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.107664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.107839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.108013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.108023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.108031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.108039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.120265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.120597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.120615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.120623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.120797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.120971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.120985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.120992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.120998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.133383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.133722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.133740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.133748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.133922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.134097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.134107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.134114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.134121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.146500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.146938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.146984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.147008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.147605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.148176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.148187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.148193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.148200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.159580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.159864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.159882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.159890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.160063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.160242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.160252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.160259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.160270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.172602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.172942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.172996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.173019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.173617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.174216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.174255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.174263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.940 [2024-12-10 04:14:30.174270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.940 [2024-12-10 04:14:30.185687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.940 [2024-12-10 04:14:30.186045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.940 [2024-12-10 04:14:30.186063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.940 [2024-12-10 04:14:30.186071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.940 [2024-12-10 04:14:30.186248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.940 [2024-12-10 04:14:30.186422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.940 [2024-12-10 04:14:30.186432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.940 [2024-12-10 04:14:30.186439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.941 [2024-12-10 04:14:30.186446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.941 [2024-12-10 04:14:30.198857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.941 [2024-12-10 04:14:30.199200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.941 [2024-12-10 04:14:30.199219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.941 [2024-12-10 04:14:30.199227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.941 [2024-12-10 04:14:30.199426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.941 [2024-12-10 04:14:30.199610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.941 [2024-12-10 04:14:30.199620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.941 [2024-12-10 04:14:30.199627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.941 [2024-12-10 04:14:30.199635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.941 [2024-12-10 04:14:30.212377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.941 [2024-12-10 04:14:30.212828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.941 [2024-12-10 04:14:30.212847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:30.941 [2024-12-10 04:14:30.212855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:30.941 [2024-12-10 04:14:30.213056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:30.941 [2024-12-10 04:14:30.213269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.941 [2024-12-10 04:14:30.213281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.941 [2024-12-10 04:14:30.213288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.941 [2024-12-10 04:14:30.213295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.225494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.225910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.225928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.225936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.226110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.226291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.226301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.226308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.226315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.238521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.238853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.238892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.238917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.239480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.239656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.239666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.239672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.239680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.251525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.251810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.251828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.251836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.252013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.252192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.252202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.252209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.252216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.264561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.264905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.264923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.264930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.265099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.265272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.265281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.265288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.265295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.277679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.277952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.277970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.277978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.278151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.278330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.278340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.278347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.278354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.290733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.291072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.291090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.291098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.291275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.291450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.291465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.291472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.291479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.303844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.304205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.304224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.304232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.304405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.304580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.304590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.304597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.304604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.201 [2024-12-10 04:14:30.316829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.201 [2024-12-10 04:14:30.317119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.201 [2024-12-10 04:14:30.317138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.201 [2024-12-10 04:14:30.317146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.201 [2024-12-10 04:14:30.317324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.201 [2024-12-10 04:14:30.317499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.201 [2024-12-10 04:14:30.317509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.201 [2024-12-10 04:14:30.317516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.201 [2024-12-10 04:14:30.317523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.329890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.330227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.330253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.330428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.330601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.330610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.330617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.330628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.343007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.343349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.343367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.343375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.343548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.343723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.343733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.343741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.343748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.356118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.356405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.356450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.356473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.357056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.357460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.357471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.357478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.357484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.369207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.369568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.369585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.369593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.369766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.369939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.369949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.369956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.369963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.382177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.382468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.382488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.382496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.382675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.382850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.382860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.382867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.382874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.395253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.395616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.395634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.395642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.395816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.395990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.396000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.396006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.396013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.408243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.408651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.408669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.408677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.408849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.409023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.409033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.409039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.409046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.421259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.421550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.421567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.421575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.421752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.421927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.421937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.421943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.421951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.434328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.434666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.434684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.434692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.434866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.435039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.435049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.435056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.435063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.447448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.447714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.202 [2024-12-10 04:14:30.447732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.202 [2024-12-10 04:14:30.447740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.202 [2024-12-10 04:14:30.447914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.202 [2024-12-10 04:14:30.448088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.202 [2024-12-10 04:14:30.448098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.202 [2024-12-10 04:14:30.448104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.202 [2024-12-10 04:14:30.448111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.202 [2024-12-10 04:14:30.460493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.202 [2024-12-10 04:14:30.460901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.203 [2024-12-10 04:14:30.460919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.203 [2024-12-10 04:14:30.460927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.203 [2024-12-10 04:14:30.461100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.203 [2024-12-10 04:14:30.461278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.203 [2024-12-10 04:14:30.461291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.203 [2024-12-10 04:14:30.461298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.203 [2024-12-10 04:14:30.461305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.203 [2024-12-10 04:14:30.473515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.203 [2024-12-10 04:14:30.473938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.203 [2024-12-10 04:14:30.473956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.203 [2024-12-10 04:14:30.473964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.203 [2024-12-10 04:14:30.474137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.203 [2024-12-10 04:14:30.474315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.203 [2024-12-10 04:14:30.474325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.203 [2024-12-10 04:14:30.474332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.203 [2024-12-10 04:14:30.474339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.486565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.486995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.487036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.487061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.487664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 [2024-12-10 04:14:30.487896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.471 [2024-12-10 04:14:30.487905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.471 [2024-12-10 04:14:30.487912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.471 [2024-12-10 04:14:30.487918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.499641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.500062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.500079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.500087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.500266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 [2024-12-10 04:14:30.500440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.471 [2024-12-10 04:14:30.500450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.471 [2024-12-10 04:14:30.500456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.471 [2024-12-10 04:14:30.500463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.512710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.513112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.513129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.513137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.513331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 [2024-12-10 04:14:30.513505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.471 [2024-12-10 04:14:30.513515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.471 [2024-12-10 04:14:30.513522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.471 [2024-12-10 04:14:30.513528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.525684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.526123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.526141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.526148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.526342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 7519.50 IOPS, 29.37 MiB/s [2024-12-10T03:14:30.757Z]
[2024-12-10 04:14:30.527775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-12-10 04:14:30.527784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-12-10 04:14:30.527790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-12-10 04:14:30.527796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.538752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.539179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.539198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.539206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.539388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 [2024-12-10 04:14:30.539557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.471 [2024-12-10 04:14:30.539566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.471 [2024-12-10 04:14:30.539573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.471 [2024-12-10 04:14:30.539579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.551832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.552268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.552321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.471 [2024-12-10 04:14:30.552345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.471 [2024-12-10 04:14:30.552928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.471 [2024-12-10 04:14:30.553344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.471 [2024-12-10 04:14:30.553354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.471 [2024-12-10 04:14:30.553360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.471 [2024-12-10 04:14:30.553367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.471 [2024-12-10 04:14:30.564808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.471 [2024-12-10 04:14:30.565241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.471 [2024-12-10 04:14:30.565287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.565311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.565892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.566109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.566119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.566125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.566132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.577605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.578033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.578078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.578102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.578698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.579238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.579248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.579255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.579262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.590657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.590940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.590958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.590966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.591143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.591323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.591333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.591340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.591346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.603739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.604158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.604215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.604238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.604821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.605383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.605402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.605416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.605430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.618665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.619160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.619187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.619198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.619453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.619709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.619722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.619732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.619742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.631665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.632072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.632089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.632097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.632276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.632450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.632463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.632470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.632477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.644671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.645071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.645089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.645096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.645276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.645451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.645460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.645467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.645473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.657705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.658118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.658164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.658202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.658672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.658847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.658857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.658864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.658870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.670815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.671180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.671198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.671206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.671378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.671551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.671561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.671568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.671574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.472 [2024-12-10 04:14:30.683789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.472 [2024-12-10 04:14:30.684193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.472 [2024-12-10 04:14:30.684212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.472 [2024-12-10 04:14:30.684220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.472 [2024-12-10 04:14:30.684394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.472 [2024-12-10 04:14:30.684569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.472 [2024-12-10 04:14:30.684578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.472 [2024-12-10 04:14:30.684585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.472 [2024-12-10 04:14:30.684592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.473 [2024-12-10 04:14:30.696978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.473 [2024-12-10 04:14:30.697392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.473 [2024-12-10 04:14:30.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.473 [2024-12-10 04:14:30.697418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.473 [2024-12-10 04:14:30.697592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.473 [2024-12-10 04:14:30.697766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.473 [2024-12-10 04:14:30.697776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.473 [2024-12-10 04:14:30.697782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.473 [2024-12-10 04:14:30.697789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.473 [2024-12-10 04:14:30.709941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.473 [2024-12-10 04:14:30.710376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.473 [2024-12-10 04:14:30.710423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.473 [2024-12-10 04:14:30.710447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.473 [2024-12-10 04:14:30.711029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.473 [2024-12-10 04:14:30.711526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.473 [2024-12-10 04:14:30.711535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.473 [2024-12-10 04:14:30.711542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.473 [2024-12-10 04:14:30.711549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.473 [2024-12-10 04:14:30.722981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.473 [2024-12-10 04:14:30.723419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.473 [2024-12-10 04:14:30.723473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.473 [2024-12-10 04:14:30.723498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.473 [2024-12-10 04:14:30.724079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.473 [2024-12-10 04:14:30.724673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.473 [2024-12-10 04:14:30.724684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.473 [2024-12-10 04:14:30.724691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.473 [2024-12-10 04:14:30.724698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.473 [2024-12-10 04:14:30.736082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.473 [2024-12-10 04:14:30.736491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.473 [2024-12-10 04:14:30.736509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.473 [2024-12-10 04:14:30.736516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.473 [2024-12-10 04:14:30.736690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.473 [2024-12-10 04:14:30.736864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.473 [2024-12-10 04:14:30.736873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.473 [2024-12-10 04:14:30.736880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.473 [2024-12-10 04:14:30.736887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.787 [2024-12-10 04:14:30.749051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.787 [2024-12-10 04:14:30.749484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.787 [2024-12-10 04:14:30.749502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.787 [2024-12-10 04:14:30.749510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.787 [2024-12-10 04:14:30.749683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.787 [2024-12-10 04:14:30.749858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.787 [2024-12-10 04:14:30.749868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.787 [2024-12-10 04:14:30.749874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.787 [2024-12-10 04:14:30.749882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.787 [2024-12-10 04:14:30.762093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.787 [2024-12-10 04:14:30.762524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.787 [2024-12-10 04:14:30.762543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420
00:27:31.787 [2024-12-10 04:14:30.762552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set
00:27:31.787 [2024-12-10 04:14:30.762725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor
00:27:31.787 [2024-12-10 04:14:30.762903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.788 [2024-12-10 04:14:30.762913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.788 [2024-12-10 04:14:30.762920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.788 [2024-12-10 04:14:30.762927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.788 [2024-12-10 04:14:30.775175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.775539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.775558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.775565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.775739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.775914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.775924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.775931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.775938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.788 [2024-12-10 04:14:30.788238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.788631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.788678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.788701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.789300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.789835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.789845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.789852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.789859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.788 [2024-12-10 04:14:30.801339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.801772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.801791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.801799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.801972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.802147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.802157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.802172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.802180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.788 [2024-12-10 04:14:30.814410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.814816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.814833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.814840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.815008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.815182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.815208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.815216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.815223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.788 [2024-12-10 04:14:30.827299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.827726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.827743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.827751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.827919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.828087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.828097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.828103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.828110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.788 [2024-12-10 04:14:30.840094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.840478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.840485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.840653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.840822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.840832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.840838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.840845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.788 [2024-12-10 04:14:30.852938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.853361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.853379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.853387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.853557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.853717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.853726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.853733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.853739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.788 [2024-12-10 04:14:30.865827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.866174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.866192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.866200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.866362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.866522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.866531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.866538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.866545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.788 [2024-12-10 04:14:30.878791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.879114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.879131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.879139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.879303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.879489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.879498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.879505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.788 [2024-12-10 04:14:30.879511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.788 [2024-12-10 04:14:30.891642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.788 [2024-12-10 04:14:30.891976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.788 [2024-12-10 04:14:30.891995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.788 [2024-12-10 04:14:30.892006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.788 [2024-12-10 04:14:30.892179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.788 [2024-12-10 04:14:30.892349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.788 [2024-12-10 04:14:30.892358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.788 [2024-12-10 04:14:30.892365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.892371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.789 [2024-12-10 04:14:30.904601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.904954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.904971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.904979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.905138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.905302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.905312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.905318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.905325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.789 [2024-12-10 04:14:30.917418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.917835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.917879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.917902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.918498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.919004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.919013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.919020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.919026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
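[Editor's note] Within each failed attempt, the follow-up message "Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor" is errno 9 (EBADF): once connect() has failed, the qpair's socket is invalid, so the subsequent flush in nvme_tcp_qpair_process_completions has no usable descriptor. A tiny illustration with plain POSIX calls (not SPDK internals) of I/O on a closed descriptor producing the same errno:

/* ebadf_sketch.c - why the flush reports "(9): Bad file descriptor":
 * after the connect failure the socket is closed/invalid, so any later
 * send/flush on it fails with errno = EBADF (9 on Linux). Illustrative only. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);    /* any valid descriptor stands in for the qpair socket */
    close(fd);          /* qpair teardown closes the socket... */

    if (write(fd, "x", 1) < 0)  /* ...so later I/O fails with EBADF */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    return 0;
}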
00:27:31.789 [2024-12-10 04:14:30.930210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.930632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.930677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.930701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.931103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.931294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.931305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.931311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.931318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.789 [2024-12-10 04:14:30.943076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.943440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.943458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.943465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.943624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.943784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.943793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.943799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.943805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.789 [2024-12-10 04:14:30.955846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.956257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.956282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.956441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.956602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.956610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.956617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.956623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.789 [2024-12-10 04:14:30.968707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.969122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.969139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.969147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.969339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.969514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.969523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.969535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.969542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.789 [2024-12-10 04:14:30.981799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.982206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.982225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.982234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.982408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.982583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.982592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.982600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.982606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.789 [2024-12-10 04:14:30.994720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:30.995057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:30.995074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:30.995082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:30.995265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:30.995434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:30.995443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:30.995450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:30.995457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.789 [2024-12-10 04:14:31.007572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:31.007996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:31.008042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:31.008066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:31.008478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:31.008648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:31.008657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:31.008664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:31.008670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.789 [2024-12-10 04:14:31.020502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.789 [2024-12-10 04:14:31.020931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.789 [2024-12-10 04:14:31.020967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.789 [2024-12-10 04:14:31.020974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.789 [2024-12-10 04:14:31.021134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.789 [2024-12-10 04:14:31.021322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.789 [2024-12-10 04:14:31.021333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.789 [2024-12-10 04:14:31.021339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.789 [2024-12-10 04:14:31.021345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.789 [2024-12-10 04:14:31.033246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.790 [2024-12-10 04:14:31.033663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.790 [2024-12-10 04:14:31.033680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.790 [2024-12-10 04:14:31.033688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.790 [2024-12-10 04:14:31.033847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.790 [2024-12-10 04:14:31.034007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.790 [2024-12-10 04:14:31.034016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.790 [2024-12-10 04:14:31.034022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.790 [2024-12-10 04:14:31.034028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.790 [2024-12-10 04:14:31.046098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.790 [2024-12-10 04:14:31.046511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.790 [2024-12-10 04:14:31.046527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.790 [2024-12-10 04:14:31.046535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.790 [2024-12-10 04:14:31.046694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.790 [2024-12-10 04:14:31.046854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.790 [2024-12-10 04:14:31.046863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.790 [2024-12-10 04:14:31.046870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.790 [2024-12-10 04:14:31.046876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.790 [2024-12-10 04:14:31.059187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.790 [2024-12-10 04:14:31.059613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.790 [2024-12-10 04:14:31.059631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:31.790 [2024-12-10 04:14:31.059643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:31.790 [2024-12-10 04:14:31.059817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:31.790 [2024-12-10 04:14:31.059991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.790 [2024-12-10 04:14:31.060000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.790 [2024-12-10 04:14:31.060007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.790 [2024-12-10 04:14:31.060013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.086 [2024-12-10 04:14:31.072190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.072609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.072626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.072634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.072803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.072971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.072981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.072988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.072994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.086 [2024-12-10 04:14:31.085285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.085714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.085733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.085741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.085914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.086087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.086097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.086104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.086110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.086 [2024-12-10 04:14:31.098290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.098697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.098714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.098723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.098895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.099073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.099083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.099090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.099097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.086 [2024-12-10 04:14:31.111323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.111739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.111756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.111764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.111933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.112101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.112111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.112118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.112124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.086 [2024-12-10 04:14:31.124150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.124471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.124489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.124497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.124655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.124815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.124824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.124830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.124837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.086 [2024-12-10 04:14:31.136988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.137399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.137416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.137423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.137583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.137742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.137751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.137761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.137767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.086 [2024-12-10 04:14:31.149809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.150220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.150266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.086 [2024-12-10 04:14:31.150290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.086 [2024-12-10 04:14:31.150872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.086 [2024-12-10 04:14:31.151471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.086 [2024-12-10 04:14:31.151498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.086 [2024-12-10 04:14:31.151518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.086 [2024-12-10 04:14:31.151539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.086 [2024-12-10 04:14:31.165163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.086 [2024-12-10 04:14:31.165688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.086 [2024-12-10 04:14:31.165711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.165721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.165974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.166237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.166251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.166262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.166272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.087 [2024-12-10 04:14:31.178124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.178554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.178570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.178578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.178746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.178915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.178925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.178931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.178938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.087 [2024-12-10 04:14:31.191053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.191416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.191433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.191441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.191608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.191777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.191787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.191793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.191800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.087 [2024-12-10 04:14:31.203808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.204218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.204235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.204242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.204403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.204562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.204571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.204578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.204584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.087 [2024-12-10 04:14:31.216737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.217189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.217237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.217261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.217736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.217897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.217906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.217912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.217918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.087 [2024-12-10 04:14:31.229544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.229969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.229986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.229997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.230172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.230361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.230371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.230378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.230385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.087 [2024-12-10 04:14:31.242606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.242965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.242983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.242991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.243163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.243343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.243353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.243360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.243367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.087 [2024-12-10 04:14:31.255619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.256045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.256090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.256113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.256713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.257146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.257155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.257162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.257173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.087 [2024-12-10 04:14:31.268478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.268891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.268936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.268960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.269552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.270003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.270012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.270018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.270024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.087 [2024-12-10 04:14:31.281318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.281729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.281770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.281795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.087 [2024-12-10 04:14:31.282392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.087 [2024-12-10 04:14:31.282914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.087 [2024-12-10 04:14:31.282924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.087 [2024-12-10 04:14:31.282931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.087 [2024-12-10 04:14:31.282937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.087 [2024-12-10 04:14:31.296181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.087 [2024-12-10 04:14:31.296711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.087 [2024-12-10 04:14:31.296757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.087 [2024-12-10 04:14:31.296779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.297280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.297537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.297550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.297560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.297570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.088 [2024-12-10 04:14:31.309118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.088 [2024-12-10 04:14:31.309520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.088 [2024-12-10 04:14:31.309538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.088 [2024-12-10 04:14:31.309546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.309714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.309883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.309892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.309899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.309908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.088 [2024-12-10 04:14:31.321946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.088 [2024-12-10 04:14:31.322369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.088 [2024-12-10 04:14:31.322388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.088 [2024-12-10 04:14:31.322395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.322555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.322715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.322724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.322730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.322737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.088 [2024-12-10 04:14:31.334842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.088 [2024-12-10 04:14:31.335268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.088 [2024-12-10 04:14:31.335316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.088 [2024-12-10 04:14:31.335339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.335844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.336005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.336014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.336020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.336026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.088 [2024-12-10 04:14:31.348046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.088 [2024-12-10 04:14:31.348403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.088 [2024-12-10 04:14:31.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.088 [2024-12-10 04:14:31.348430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.348603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.348776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.348786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.348793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.348800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.088 [2024-12-10 04:14:31.361151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.088 [2024-12-10 04:14:31.361574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.088 [2024-12-10 04:14:31.361591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.088 [2024-12-10 04:14:31.361599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.088 [2024-12-10 04:14:31.361767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.088 [2024-12-10 04:14:31.361936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.088 [2024-12-10 04:14:31.361945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.088 [2024-12-10 04:14:31.361951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.088 [2024-12-10 04:14:31.361958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.348 [2024-12-10 04:14:31.374276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.374697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.374740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.374766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.375313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.375484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.375494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.375501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.375507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.349 [2024-12-10 04:14:31.387105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.387497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.387515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.387523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.387681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.387841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.387850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.387856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.387863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.349 [2024-12-10 04:14:31.400054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.400446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.400464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.400472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.400635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.400796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.400805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.400812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.400818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.349 [2024-12-10 04:14:31.412810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.413152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.413174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.413182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.413341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.413501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.413510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.413516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.413523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.349 [2024-12-10 04:14:31.425562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.425951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.425967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.425975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.426133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.426321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.426331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.426339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.426345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.349 [2024-12-10 04:14:31.438318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.438727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.438744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.438752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.438911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.439070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.439082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.439089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.439097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.349 [2024-12-10 04:14:31.451144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.451596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.451622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.452141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.452307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.452316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.452322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.452327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.349 [2024-12-10 04:14:31.463994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.464446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.464493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.464517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.465070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.465238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.465248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.465254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.465260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.349 [2024-12-10 04:14:31.478847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.479401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.479448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.479471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.480053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.480559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.480573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.480583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.480597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.349 [2024-12-10 04:14:31.491938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.349 [2024-12-10 04:14:31.492280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.349 [2024-12-10 04:14:31.492299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.349 [2024-12-10 04:14:31.492307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.349 [2024-12-10 04:14:31.492480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.349 [2024-12-10 04:14:31.492654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.349 [2024-12-10 04:14:31.492663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.349 [2024-12-10 04:14:31.492670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.349 [2024-12-10 04:14:31.492677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.349 [2024-12-10 04:14:31.505042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.505454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.505472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.505479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.505653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.505826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.505836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.505843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.505850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.350 [2024-12-10 04:14:31.517881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.518228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.518246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.518254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.518423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.518592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.518602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.518608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.518614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.350 6015.60 IOPS, 23.50 MiB/s [2024-12-10T03:14:31.636Z] [2024-12-10 04:14:31.531950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.532321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.532339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.532347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.532506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.532666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.532675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.532681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.532688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.350 [2024-12-10 04:14:31.544848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.545249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.545266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.545275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.545433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.545593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.545602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.545609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.545614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
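The "6015.60 IOPS, 23.50 MiB/s" fragment interleaved above is the test's periodic throughput sample, not part of the error stream. The two figures are mutually consistent with a 4 KiB I/O size; that block size is an inference, since the log does not state it here: 6015.60 x 4096 B is about 24.64 MB/s, which is 23.50 MiB/s. A one-liner to check the conversion:

/* Sanity check of the throughput sample (assumption: 4 KiB I/O size).
 * 6015.60 IOPS * 4096 B = 24,639,898 B/s; divided by 2^20 = 23.50 MiB/s,
 * matching the "6015.60 IOPS, 23.50 MiB/s" line above. */
#include <stdio.h>

int main(void)
{
    double iops = 6015.60;
    double block_size = 4096.0; /* bytes, assumed */
    double mib_s = iops * block_size / (1024.0 * 1024.0);
    printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_s); /* prints 23.50 */
    return 0;
}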
00:27:32.350 [2024-12-10 04:14:31.557985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.558298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.558316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.558324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.558497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.558671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.558681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.558687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.558694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.350 [2024-12-10 04:14:31.570929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.571265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.571284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.571291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.571466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.571637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.571647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.571653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.571661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.350 [2024-12-10 04:14:31.583861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.584119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.584136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.584144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.584310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.584470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.584480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.584486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.584492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.350 [2024-12-10 04:14:31.596840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.597205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.597251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.597274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.597790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.597952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.597961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.597968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.597975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.350 [2024-12-10 04:14:31.609740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.610162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.610184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.610192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.610360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.610529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.610542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.610548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.610556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.350 [2024-12-10 04:14:31.622517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.350 [2024-12-10 04:14:31.622985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.350 [2024-12-10 04:14:31.623030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.350 [2024-12-10 04:14:31.623053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.350 [2024-12-10 04:14:31.623579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.350 [2024-12-10 04:14:31.623750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.350 [2024-12-10 04:14:31.623760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.350 [2024-12-10 04:14:31.623767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.350 [2024-12-10 04:14:31.623774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.611 [2024-12-10 04:14:31.635415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.611 [2024-12-10 04:14:31.635775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.611 [2024-12-10 04:14:31.635793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.611 [2024-12-10 04:14:31.635800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.611 [2024-12-10 04:14:31.635973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.611 [2024-12-10 04:14:31.636148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.611 [2024-12-10 04:14:31.636157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.611 [2024-12-10 04:14:31.636164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.611 [2024-12-10 04:14:31.636177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.611 [2024-12-10 04:14:31.648254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.611 [2024-12-10 04:14:31.648542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.611 [2024-12-10 04:14:31.648588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.611 [2024-12-10 04:14:31.648612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.611 [2024-12-10 04:14:31.649207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.611 [2024-12-10 04:14:31.649795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.611 [2024-12-10 04:14:31.649823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.611 [2024-12-10 04:14:31.649830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.611 [2024-12-10 04:14:31.649839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.611 [2024-12-10 04:14:31.661135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.611 [2024-12-10 04:14:31.661535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.611 [2024-12-10 04:14:31.661581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.611 [2024-12-10 04:14:31.661605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.611 [2024-12-10 04:14:31.662055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.611 [2024-12-10 04:14:31.662230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.611 [2024-12-10 04:14:31.662240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.611 [2024-12-10 04:14:31.662247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.611 [2024-12-10 04:14:31.662253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.611 [2024-12-10 04:14:31.674021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.611 [2024-12-10 04:14:31.674386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.611 [2024-12-10 04:14:31.674404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.611 [2024-12-10 04:14:31.674412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.611 [2024-12-10 04:14:31.674581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.611 [2024-12-10 04:14:31.674750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.611 [2024-12-10 04:14:31.674760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.611 [2024-12-10 04:14:31.674767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.611 [2024-12-10 04:14:31.674773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.611 [2024-12-10 04:14:31.686874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.611 [2024-12-10 04:14:31.687287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.611 [2024-12-10 04:14:31.687305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.611 [2024-12-10 04:14:31.687313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.611 [2024-12-10 04:14:31.687485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.611 [2024-12-10 04:14:31.687646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.687655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.687661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.687667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.612 [2024-12-10 04:14:31.699827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.700186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.700204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.700212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.700392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.700562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.700571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.700578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.700586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.612 [2024-12-10 04:14:31.712703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.713117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.713134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.713142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.713328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.713498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.713508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.713514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.713521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.612 [2024-12-10 04:14:31.725576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.726006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.726024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.726032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.726205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.726375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.726384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.726391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.726398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.612 [2024-12-10 04:14:31.738496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.738935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.738952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.738959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.739130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.739323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.739333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.739340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.739347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.612 [2024-12-10 04:14:31.751558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.751974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.751999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.752179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.752353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.752363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.752370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.752377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.612 [2024-12-10 04:14:31.764629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.764982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.764999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.765007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.765180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.765349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.765359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.765366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.765372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.612 [2024-12-10 04:14:31.777563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.778027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.778074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.778099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.778698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.779197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.779210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.779217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.779223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.612 [2024-12-10 04:14:31.790422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.790714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.790742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.790750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.790910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.791070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.791079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.791085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.791091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.612 [2024-12-10 04:14:31.803257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.612 [2024-12-10 04:14:31.803671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.612 [2024-12-10 04:14:31.803689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.612 [2024-12-10 04:14:31.803696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.612 [2024-12-10 04:14:31.803864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.612 [2024-12-10 04:14:31.804034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.612 [2024-12-10 04:14:31.804043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.612 [2024-12-10 04:14:31.804050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.612 [2024-12-10 04:14:31.804057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.612 [2024-12-10 04:14:31.816001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.816338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.816355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.816362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.816531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.816699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.816709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.816715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.816725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.613 [2024-12-10 04:14:31.828929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.829342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.829389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.829414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.829996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.830491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.830501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.830507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.830513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.613 [2024-12-10 04:14:31.841820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.842298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.842344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.842368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.842951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.843130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.843140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.843147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.843153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.613 [2024-12-10 04:14:31.854649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.855065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.855082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.855089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.855273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.855442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.855451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.855458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.855464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.613 [2024-12-10 04:14:31.867410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.867816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.867874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.867901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.868446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.868607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.868616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.868623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.868629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.613 [2024-12-10 04:14:31.880260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.613 [2024-12-10 04:14:31.880659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.613 [2024-12-10 04:14:31.880704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.613 [2024-12-10 04:14:31.880728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.613 [2024-12-10 04:14:31.881223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.613 [2024-12-10 04:14:31.881393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.613 [2024-12-10 04:14:31.881403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.613 [2024-12-10 04:14:31.881409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.613 [2024-12-10 04:14:31.881416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.873 [2024-12-10 04:14:31.893310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.873 [2024-12-10 04:14:31.893709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.873 [2024-12-10 04:14:31.893727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.873 [2024-12-10 04:14:31.893734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.873 [2024-12-10 04:14:31.893908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.873 [2024-12-10 04:14:31.894083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.873 [2024-12-10 04:14:31.894092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.873 [2024-12-10 04:14:31.894099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.873 [2024-12-10 04:14:31.894106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.873 [2024-12-10 04:14:31.906253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.906606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.906648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.906673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.907210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.907372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.907381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.907388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.907394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.874 [2024-12-10 04:14:31.919188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.919459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.919475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.919484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.919643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.919803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.919812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.919819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.919826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.874 [2024-12-10 04:14:31.932008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.932351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.932368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.932376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.932535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.932694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.932704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.932711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.932718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.874 [2024-12-10 04:14:31.944865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.945213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.945230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.945237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.945396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.945555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.945567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.945574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.945580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.874 [2024-12-10 04:14:31.957793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.958137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.958154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.958161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.958328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.958489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.958498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.958504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.958510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.874 [2024-12-10 04:14:31.970661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.971008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.971025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.971032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.971196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.971356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.971365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.971371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.971377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.874 [2024-12-10 04:14:31.983432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.983771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.983789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.983796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.983956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.984115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.984124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.984131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.984137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.874 [2024-12-10 04:14:31.996365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:31.996718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:31.996736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:31.996745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:31.996918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:31.997092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:31.997102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:31.997109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:31.997115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.874 [2024-12-10 04:14:32.009344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:32.009796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:32.009814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:32.009822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:32.009995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:32.010176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:32.010187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:32.010194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:32.010200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.874 [2024-12-10 04:14:32.022276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.874 [2024-12-10 04:14:32.022633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.874 [2024-12-10 04:14:32.022650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.874 [2024-12-10 04:14:32.022658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.874 [2024-12-10 04:14:32.022827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.874 [2024-12-10 04:14:32.022996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.874 [2024-12-10 04:14:32.023005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.874 [2024-12-10 04:14:32.023011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.874 [2024-12-10 04:14:32.023018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.874 [2024-12-10 04:14:32.035090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.035521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.035575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.035600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.036000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.036161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.036176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.036182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.036188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.875 [2024-12-10 04:14:32.048072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.048538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.048555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.048563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.048730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.048898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.048907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.048914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.048921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.875 [2024-12-10 04:14:32.060860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.061253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.061271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.061278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.061437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.061598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.061607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.061613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.061619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.875 [2024-12-10 04:14:32.073669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.074062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.074079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.074086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.074273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.074442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.074452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.074458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.074465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.875 [2024-12-10 04:14:32.086436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.086763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.086780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.086787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.086945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.087106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.087115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.087121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.087127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.875 [2024-12-10 04:14:32.099196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.099617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.099661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.099684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.100112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.100306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.100316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.100323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.100329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.875 [2024-12-10 04:14:32.112032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.112446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.112493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.112518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.113008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.113174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.113185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.113211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.113219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.875 [2024-12-10 04:14:32.124900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.125295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.125313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.125321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.125481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.125641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.125649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.125656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.125662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.875 [2024-12-10 04:14:32.137625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.138038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.138108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.138705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.139197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.139207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.139214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.139221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.875 [2024-12-10 04:14:32.150485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.875 [2024-12-10 04:14:32.150923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-12-10 04:14:32.150967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-12-10 04:14:32.150991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:32.875 [2024-12-10 04:14:32.151368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:32.875 [2024-12-10 04:14:32.151545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.875 [2024-12-10 04:14:32.151555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.875 [2024-12-10 04:14:32.151562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.875 [2024-12-10 04:14:32.151568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.136 [2024-12-10 04:14:32.163462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.136 [2024-12-10 04:14:32.163882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-12-10 04:14:32.163899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.136 [2024-12-10 04:14:32.163907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.136 [2024-12-10 04:14:32.164076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.136 [2024-12-10 04:14:32.164268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.136 [2024-12-10 04:14:32.164278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.136 [2024-12-10 04:14:32.164287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.136 [2024-12-10 04:14:32.164294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 209467 Killed "${NVMF_APP[@]}" "$@" 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=210837 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 210837 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 210837 ']' 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.136 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.136 [2024-12-10 04:14:32.176436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.136 [2024-12-10 04:14:32.176760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-12-10 04:14:32.176777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.136 [2024-12-10 04:14:32.176785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.136 [2024-12-10 04:14:32.176959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.136 [2024-12-10 04:14:32.177133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.136 [2024-12-10 04:14:32.177142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.136 [2024-12-10 04:14:32.177149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.136 [2024-12-10 04:14:32.177161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
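At this point the script side of the test becomes visible again between the reconnect errors: line 35 of bdevperf.sh reports that the previous nvmf_tgt (pid 209467) was killed, and tgt_init/nvmfappstart relaunches the target inside the cvl_0_0_ns_spdk network namespace with "-m 0xE", then waitforlisten blocks until the new process (nvmfpid=210837) is answering on the RPC socket /var/tmp/spdk.sock. A sketch of that wait, hypothetical rather than the actual waitforlisten helper, is just a retry loop on a UNIX-domain connect:

    /* waitforlisten sketch (hypothetical, not the SPDK test helper):
     * poll the UNIX-domain RPC socket until the freshly started nvmf_tgt
     * is accepting connections on /var/tmp/spdk.sock. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_rpc(const char *path, int max_retries)
    {
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* target is up and listening */
            }
            close(fd);
            usleep(100 * 1000);      /* retry every 100 ms */
        }
        return -1;                   /* gave up */
    }

    int main(void)
    {
        if (wait_for_rpc("/var/tmp/spdk.sock", 100) == 0)
            puts("nvmf_tgt is listening");
        else
            puts("gave up waiting for /var/tmp/spdk.sock");
        return 0;
    }

Only once this connect succeeds can the harness issue RPCs to the new target, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is polling for.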
00:27:33.136 [2024-12-10 04:14:32.189540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.136 [2024-12-10 04:14:32.189968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.136 [2024-12-10 04:14:32.189985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.189993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.190170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.190344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.190354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.190361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.190367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.137 [2024-12-10 04:14:32.202570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.202999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.203017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.203024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.203203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.203378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.203387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.203394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.203401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.137 [2024-12-10 04:14:32.215649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.216070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.216087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.216095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.216267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.216437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.216447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.216454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.216460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.137 [2024-12-10 04:14:32.225510] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:27:33.137 [2024-12-10 04:14:32.225550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.137 [2024-12-10 04:14:32.228674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.229017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.229035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.229042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.229215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.229386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.229396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.229403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.229411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.137 [2024-12-10 04:14:32.241737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.242144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.242162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.242174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.242363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.242538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.242547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.242554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.242562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.137 [2024-12-10 04:14:32.254654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.255084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.255102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.255110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.255289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.255464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.255474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.255480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.255488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.137 [2024-12-10 04:14:32.267697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.268051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.268069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.268077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.268255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.268429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.268438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.268445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.268452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.137 [2024-12-10 04:14:32.280815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.281244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.281262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.281270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.281443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.281618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.281628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.281635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.281641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.137 [2024-12-10 04:14:32.293861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.294288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.294307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.294315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.294488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.294663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.294673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.294679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.294686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.137 [2024-12-10 04:14:32.304490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.137 [2024-12-10 04:14:32.306915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.137 [2024-12-10 04:14:32.307268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.137 [2024-12-10 04:14:32.307287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.137 [2024-12-10 04:14:32.307299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.137 [2024-12-10 04:14:32.307474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.137 [2024-12-10 04:14:32.307649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.137 [2024-12-10 04:14:32.307658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.137 [2024-12-10 04:14:32.307665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.137 [2024-12-10 04:14:32.307672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.137 [2024-12-10 04:14:32.319897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.320350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.320373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.320381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.320563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.320734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.320743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.320751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.320758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.138 [2024-12-10 04:14:32.332964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.333393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.333411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.333419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.333588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.333757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.333767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.333773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.333780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.138 [2024-12-10 04:14:32.345486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.138 [2024-12-10 04:14:32.345511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.138 [2024-12-10 04:14:32.345518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.138 [2024-12-10 04:14:32.345524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.138 [2024-12-10 04:14:32.345530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:33.138 [2024-12-10 04:14:32.346001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.346439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.346457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.346466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.346634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.346805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.346814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.346821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.346844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.138 [2024-12-10 04:14:32.346785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.138 [2024-12-10 04:14:32.346894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.138 [2024-12-10 04:14:32.346895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.138 [2024-12-10 04:14:32.359060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.359518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.359540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.359549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.359725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.359902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.359912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.359920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.359927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
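The three "Reactor started on core N" notices and the earlier "Total cores available: 3" both follow directly from the "-m 0xE" core mask passed to nvmf_tgt above: 0xE is binary 1110, so bit 0 (core 0) is clear and bits 1-3 select cores 1, 2 and 3. Decoding the mask explicitly:

    /* Decode the -m 0xE reactor core mask from the nvmf_tgt command line:
     * 0xE = 0b1110 -> bits 1..3 set -> reactors on cores 1, 2, 3 (3 cores). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;
        int count = 0;
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf("reactor on core %d\n", core);
                count++;
            }
        }
        printf("total cores available: %d\n", count);
        return 0;
    }

Any hex mask given to -m decodes the same way: one reactor thread per set bit.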
00:27:33.138 [2024-12-10 04:14:32.372138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.372539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.372561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.372570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.372745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.372921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.372932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.372940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.372947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.138 [2024-12-10 04:14:32.385161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.385549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.385570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.385579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.385754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.385931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.385941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.385948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.385955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
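Every failed cycle in this stretch repeats the same pattern: a disconnect notice, connect() rejected with errno 111 (ECONNREFUSED, i.e. nothing is accepting on 10.0.0.2:4420 yet, because the listener is only added further down), a flush failure on the dead socket descriptor, and the ctrlr/bdev layers declaring the reset attempt failed before retrying roughly every 13 ms (32.372 -> 32.385 -> 32.398 ...). A hypothetical spot check from the initiator side, not part of the harness, that maps to the same errno:

  nc -z -w 1 10.0.0.2 4420 && echo 'listener up' || echo 'refused or unreachable'   # refused == errno 111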
00:27:33.138 [2024-12-10 04:14:32.398181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.398614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.398636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.398645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.398819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.398996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.399006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.399014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.399021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.138 [2024-12-10 04:14:32.411235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.138 [2024-12-10 04:14:32.411672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.138 [2024-12-10 04:14:32.411691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.138 [2024-12-10 04:14:32.411699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.138 [2024-12-10 04:14:32.411874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.138 [2024-12-10 04:14:32.412048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.138 [2024-12-10 04:14:32.412057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.138 [2024-12-10 04:14:32.412065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.138 [2024-12-10 04:14:32.412072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.398 [2024-12-10 04:14:32.424312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.398 [2024-12-10 04:14:32.424749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.398 [2024-12-10 04:14:32.424768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.398 [2024-12-10 04:14:32.424782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.398 [2024-12-10 04:14:32.424956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.398 [2024-12-10 04:14:32.425131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.398 [2024-12-10 04:14:32.425141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.398 [2024-12-10 04:14:32.425149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.398 [2024-12-10 04:14:32.425156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.398 [2024-12-10 04:14:32.437360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.398 [2024-12-10 04:14:32.437723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.398 [2024-12-10 04:14:32.437740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.398 [2024-12-10 04:14:32.437748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.398 [2024-12-10 04:14:32.437921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.398 [2024-12-10 04:14:32.438095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.398 [2024-12-10 04:14:32.438104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.398 [2024-12-10 04:14:32.438111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.398 [2024-12-10 04:14:32.438118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.398 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.398 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:33.398 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.398 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.398 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.398 [2024-12-10 04:14:32.450331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.398 [2024-12-10 04:14:32.450669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.398 [2024-12-10 04:14:32.450687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.398 [2024-12-10 04:14:32.450694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.398 [2024-12-10 04:14:32.450868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.398 [2024-12-10 04:14:32.451042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.451051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.451059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.451066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.399 [2024-12-10 04:14:32.463438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 [2024-12-10 04:14:32.463722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.463741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.463754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.463926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.464101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.464111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.464118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.464125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.399 [2024-12-10 04:14:32.476518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 [2024-12-10 04:14:32.476895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.476912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.476920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.477094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.477274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.477284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.477291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.477300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 [2024-12-10 04:14:32.482405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 [2024-12-10 04:14:32.489550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 [2024-12-10 04:14:32.489882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.489901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.489909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.490083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.490261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.490271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.490282] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.490290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.399 [2024-12-10 04:14:32.502656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 [2024-12-10 04:14:32.503039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.503057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.503065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.503244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.503419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.503428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.503435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.503441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.399 [2024-12-10 04:14:32.515658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 Malloc0 00:27:33.399 [2024-12-10 04:14:32.516033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.516051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.516059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.516237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.516412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.516422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.516429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.516436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:33.399 [2024-12-10 04:14:32.528637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 [2024-12-10 04:14:32.528994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.399 [2024-12-10 04:14:32.529011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa97e0 with addr=10.0.0.2, port=4420 00:27:33.399 [2024-12-10 04:14:32.529023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa97e0 is same with the state(6) to be set 00:27:33.399 [2024-12-10 04:14:32.529202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa97e0 (9): Bad file descriptor 00:27:33.399 [2024-12-10 04:14:32.529377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.399 [2024-12-10 04:14:32.529386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.399 [2024-12-10 04:14:32.529394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.399 [2024-12-10 04:14:32.529401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:33.399 5013.00 IOPS, 19.58 MiB/s [2024-12-10T03:14:32.685Z] 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.399 [2024-12-10 04:14:32.539556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.399 [2024-12-10 04:14:32.541711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.399 04:14:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 209886 00:27:33.399 [2024-12-10 04:14:32.566393] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
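Interleaved with the reconnect noise, the target finishes its bring-up, and the moment the 4420 listener appears the pending reset finally succeeds (the last line above). Collapsing the rpc_cmd calls traced in this stretch into one sequence; the rpc.py wrapper shown here is an assumption, since the harness routes these through its own rpc_cmd helper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420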
00:27:35.270 5858.71 IOPS, 22.89 MiB/s [2024-12-10T03:14:35.932Z] 6542.50 IOPS, 25.56 MiB/s [2024-12-10T03:14:36.868Z] 7091.44 IOPS, 27.70 MiB/s [2024-12-10T03:14:37.804Z] 7522.30 IOPS, 29.38 MiB/s [2024-12-10T03:14:38.739Z] 7866.36 IOPS, 30.73 MiB/s [2024-12-10T03:14:39.674Z] 8167.67 IOPS, 31.90 MiB/s [2024-12-10T03:14:40.609Z] 8426.00 IOPS, 32.91 MiB/s [2024-12-10T03:14:41.983Z] 8621.57 IOPS, 33.68 MiB/s [2024-12-10T03:14:41.983Z] 8815.40 IOPS, 34.44 MiB/s
00:27:42.697 Latency(us)
00:27:42.697 [2024-12-10T03:14:41.983Z] Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min       max
00:27:42.697 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:42.697 Verification LBA range: start 0x0 length 0x4000
00:27:42.697 Nvme1n1            :          15.05      8796.03  34.36    10864.28  0.00    6473.53    425.20    41443.72
00:27:42.697 [2024-12-10T03:14:41.983Z] ===================================================================================================================
00:27:42.697 [2024-12-10T03:14:41.983Z] Total              :                      8796.03  34.36    10864.28  0.00    6473.53    425.20    41443.72
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:42.698 rmmod nvme_tcp
00:27:42.698 rmmod nvme_fabrics
00:27:42.698 rmmod nvme_keyring
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 210837 ']'
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 210837
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 210837 ']'
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 210837
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210837
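The summary row above is internally consistent: with 4096-byte IOs, MiB/s should equal IOPS x 4096 / 2^20. A one-line spot check (any awk will do):

  awk 'BEGIN { printf "%.2f MiB/s\n", 8796.03 * 4096 / 1048576 }'   # prints 34.36, matching the Total row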
00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210837' 00:27:42.698 killing process with pid 210837 00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 210837 00:27:42.698 04:14:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 210837 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.956 04:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.488 00:27:45.488 real 0m26.109s 00:27:45.488 user 1m1.109s 00:27:45.488 sys 0m6.713s 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:45.488 ************************************ 00:27:45.488 END TEST nvmf_bdevperf 00:27:45.488 ************************************ 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.488 ************************************ 00:27:45.488 START TEST nvmf_target_disconnect 00:27:45.488 ************************************ 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:45.488 * Looking for test storage... 
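The nvmftestfini/nvmfcleanup trace just above unwinds everything the bdevperf test created. Condensed into plain commands; the pipe through grep is an assumption about how the iptr helper stitches iptables-save to iptables-restore:

  modprobe -v -r nvme-tcp                                # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops only the comment-tagged SPDK_NVMF rules
  ip -4 addr flush cvl_0_1                               # clears the initiator-side address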
00:27:45.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:45.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.488 --rc genhtml_branch_coverage=1 00:27:45.488 --rc genhtml_function_coverage=1 00:27:45.488 --rc genhtml_legend=1 00:27:45.488 --rc geninfo_all_blocks=1 00:27:45.488 --rc geninfo_unexecuted_blocks=1 00:27:45.488 00:27:45.488 ' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:45.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.488 --rc genhtml_branch_coverage=1 00:27:45.488 --rc genhtml_function_coverage=1 00:27:45.488 --rc genhtml_legend=1 00:27:45.488 --rc geninfo_all_blocks=1 00:27:45.488 --rc geninfo_unexecuted_blocks=1 00:27:45.488 00:27:45.488 ' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:45.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.488 --rc genhtml_branch_coverage=1 00:27:45.488 --rc genhtml_function_coverage=1 00:27:45.488 --rc genhtml_legend=1 00:27:45.488 --rc geninfo_all_blocks=1 00:27:45.488 --rc geninfo_unexecuted_blocks=1 00:27:45.488 00:27:45.488 ' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:45.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.488 --rc genhtml_branch_coverage=1 00:27:45.488 --rc genhtml_function_coverage=1 00:27:45.488 --rc genhtml_legend=1 00:27:45.488 --rc geninfo_all_blocks=1 00:27:45.488 --rc geninfo_unexecuted_blocks=1 00:27:45.488 00:27:45.488 ' 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.488 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.489 04:14:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:50.764 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:50.764 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:50.764 Found net devices under 0000:af:00.0: cvl_0_0 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:50.764 Found net devices under 0000:af:00.1: cvl_0_1 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
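These NVMF_* assignments pick cvl_0_0 as the target port (10.0.0.2, moved into its own namespace) and cvl_0_1 as the initiator port (10.0.0.1); the nvmf_tcp_init trace that follows wires them into a loopback topology. The same steps, condensed into one sketch (each command appears verbatim in the trace below):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move the target port into its namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # tagged with an SPDK_NVMF comment for teardown
  ping -c 1 10.0.0.2                                               # reachability is then verified in both directions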
00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.764 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:27:51.024 00:27:51.024 --- 10.0.0.2 ping statistics --- 00:27:51.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.024 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:27:51.024 00:27:51.024 --- 10.0.0.1 ping statistics --- 00:27:51.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.024 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.024 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.283 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:51.284 ************************************ 00:27:51.284 START TEST nvmf_target_disconnect_tc1 00:27:51.284 ************************************ 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.284 04:14:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.284 [2024-12-10 04:14:50.460596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.284 [2024-12-10 04:14:50.460642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9e0b0 with addr=10.0.0.2, port=4420 00:27:51.284 [2024-12-10 04:14:50.460664] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:51.284 [2024-12-10 04:14:50.460675] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:51.284 [2024-12-10 04:14:50.460682] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:51.284 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:51.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:51.284 Initializing NVMe Controllers 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:51.284 00:27:51.284 real 0m0.121s 00:27:51.284 user 0m0.052s 00:27:51.284 sys 0m0.069s 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.284 ************************************ 00:27:51.284 END TEST nvmf_target_disconnect_tc1 00:27:51.284 ************************************ 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:51.284 ************************************ 00:27:51.284 START TEST nvmf_target_disconnect_tc2 00:27:51.284 ************************************ 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=215909 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 215909 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 215909 ']' 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.284 04:14:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.543 [2024-12-10 04:14:50.598585] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:27:51.543 [2024-12-10 04:14:50.598628] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.543 [2024-12-10 04:14:50.676907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.543 [2024-12-10 04:14:50.717418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.543 [2024-12-10 04:14:50.717457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
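An aside on the steps traced above and below: nvmfappstart launches nvmf_tgt inside the target namespace, with -m 0xF0 pinning it to cores 4-7 (matching the four reactor notices), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers; the rpc_cmd calls that follow then provision a malloc-backed subsystem. Since rpc_cmd is effectively scripts/rpc.py aimed at that socket, the whole startup-plus-provisioning sequence reduces to roughly this sketch (not the harness's exact code; the waitforlisten loop in particular is approximated, and spdk_get_version is just a cheap no-op RPC to poll with):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # waitforlisten, more or less: poll until the app answers on its RPC socket
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB RAM bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note that rpc.py can stay in the root namespace even though the target runs in cvl_0_0_ns_spdk: network namespaces do not partition the filesystem, so the Unix-domain RPC socket is visible to both sides.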
00:27:51.543 [2024-12-10 04:14:50.717464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.543 [2024-12-10 04:14:50.717471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.543 [2024-12-10 04:14:50.717478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.543 [2024-12-10 04:14:50.718993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:51.543 [2024-12-10 04:14:50.719100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:51.543 [2024-12-10 04:14:50.719208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.543 [2024-12-10 04:14:50.719209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 Malloc0 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 [2024-12-10 04:14:51.512621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 04:14:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 [2024-12-10 04:14:51.544893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=216054 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:52.480 04:14:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.392 04:14:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 215909 00:27:54.392 04:14:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error 
(sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 [2024-12-10 04:14:53.581593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read 
completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 [2024-12-10 04:14:53.581791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Write completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 
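This completion flood is what -q 32 across four I/O qpairs (-c 0xF) predicts: on teardown, every command still in flight completes with sct=0, sc=8 — NVMe Generic Command Status 0x08, "Command Aborted due to SQ Deletion", the status SPDK stamps on I/O it aborts when a qpair goes down — followed by one "CQ transport error -6" record per qpair. A quick way to sanity-check the proportions, assuming the console output were captured to a file (reconnect.log here is hypothetical):

    grep -c 'completed with error (sct=0, sc=8)' reconnect.log   # aborted I/Os, roughly the queue depth per qpair
    grep -c 'CQ transport error -6' reconnect.log                # one record per failed qpair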
00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.392 Read completed with error (sct=0, sc=8) 00:27:54.392 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 [2024-12-10 04:14:53.581993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 
starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Write completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 Read completed with error (sct=0, sc=8) 00:27:54.393 starting I/O failed 00:27:54.393 [2024-12-10 04:14:53.582194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.393 [2024-12-10 04:14:53.582379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.582404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.582576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.582591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.582756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.582788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.582981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.583139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.583336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.583496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.583665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 
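Everything from this point on is the tc2 scenario working as designed: kill -9 215909 removed the target mid-workload, so each reconnect attempt dies in connect() because nothing listens on 10.0.0.2:4420 anymore — the same property tc1 leaned on earlier, where probing before any target existed was the expected failure that the NOT wrapper turned into a pass. Stripped of the harness plumbing, the pattern is roughly the following sketch, with $rootdir and $nvmfpid standing in for the script's variables:

    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2                # let I/O ramp up on all four qpairs
    kill -9 "$nvmfpid"     # SIGKILL: no NVMe-level shutdown; the kernel just tears the sockets down
    sleep 2                # in-flight I/O aborts, then every reconnect is refused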
00:27:54.393 [2024-12-10 04:14:53.583805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.583881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.583891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.584777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.584980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.585128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 
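For completeness, errno = 111 in these records is ECONNREFUSED: the namespace, addresses and iptables rule are all still intact, so the SYN reaches 10.0.0.2, but with no listener on port 4420 the TCP stack answers with RST and connect() fails. The value can be read straight from the kernel headers on a typical Linux install:

    grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
    # -> #define ECONNREFUSED 111 /* Connection refused */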
00:27:54.393 [2024-12-10 04:14:53.585228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.585377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.585584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.585798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.585830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.585969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.586012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.586080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.393 [2024-12-10 04:14:53.586090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.393 qpair failed and we were unable to recover it. 00:27:54.393 [2024-12-10 04:14:53.586225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.586373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.586453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.586597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 
00:27:54.394 [2024-12-10 04:14:53.586667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.586820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.586900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.586912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.587857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.587890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.588086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.588120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 
00:27:54.394 [2024-12-10 04:14:53.588271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.588305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.588428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.588461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.588704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.588737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.588980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.589013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.589203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.589237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.589422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.589455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.589694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.589727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.589861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.589894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.590084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.590117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.590300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.590335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 
00:27:54.394 [2024-12-10 04:14:53.590514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.590547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.590726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.590758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.590883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.590916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.591105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.591138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.591358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.591411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.591641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.591704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.591917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.591952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.592127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.592160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.592363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.592396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.592521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.592554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 
00:27:54.394 [2024-12-10 04:14:53.592748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.592781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.592973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.593005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.593199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.593233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.593414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.593446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.394 [2024-12-10 04:14:53.593583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.394 [2024-12-10 04:14:53.593616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.394 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.593887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.593920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.594120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.594152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.594380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.594414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.594593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.594626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.594873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.594906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 
00:27:54.395 [2024-12-10 04:14:53.595031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.595208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.595376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.595520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.595678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.595826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.595858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.596118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.596150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.596416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.596452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.596555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.596588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 00:27:54.395 [2024-12-10 04:14:53.596776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.395 [2024-12-10 04:14:53.596809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.395 qpair failed and we were unable to recover it. 
00:27:54.395 [2024-12-10 04:14:53.596934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.395 [2024-12-10 04:14:53.596966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:54.395 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 04:14:53.597228 through 04:14:53.632136 ...]
00:27:54.399 [2024-12-10 04:14:53.632322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.399 [2024-12-10 04:14:53.632396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:54.399 qpair failed and we were unable to recover it.
[... the same sequence then repeats for the new tqpair=0x120e1a0 from 04:14:53.632611 through 04:14:53.643146 ...]
00:27:54.400 [2024-12-10 04:14:53.643280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.400 [2024-12-10 04:14:53.643314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:54.400 qpair failed and we were unable to recover it.
00:27:54.400 [2024-12-10 04:14:53.643520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.400 [2024-12-10 04:14:53.643553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.400 qpair failed and we were unable to recover it. 00:27:54.400 [2024-12-10 04:14:53.643812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.400 [2024-12-10 04:14:53.643845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.400 qpair failed and we were unable to recover it. 00:27:54.400 [2024-12-10 04:14:53.643967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.400 [2024-12-10 04:14:53.644001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.644225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.644259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.644378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.644411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.644640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.644674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.644856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.644889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.645014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.645047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.645157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.645198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.645390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.645423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 
00:27:54.401 [2024-12-10 04:14:53.645618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.645651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.645800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.645837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.646081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.646113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.646227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.646260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.646446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.646479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.646664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.646697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.646881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.646913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.647089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.647122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.647337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.647370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.647553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.647585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 
00:27:54.401 [2024-12-10 04:14:53.647701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.647734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.647978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.648010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.648197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.648231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.648519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.648551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.648681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.648720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.648846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.648878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.648984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.649217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.649370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.649517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 
00:27:54.401 [2024-12-10 04:14:53.649721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.649864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.649894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.650074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.650107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.650362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.650396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.650591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.650623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.650807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.650840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.401 qpair failed and we were unable to recover it. 00:27:54.401 [2024-12-10 04:14:53.651020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.401 [2024-12-10 04:14:53.651054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.651246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.651280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.651406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.651440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.651545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.651578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 
00:27:54.402 [2024-12-10 04:14:53.651770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.651803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.651930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.651962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.652145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.652187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.652382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.652414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.652530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.652562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.652745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.652778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.652985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.653018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.653193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.653227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.653442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.653474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.653597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.653629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 
00:27:54.402 [2024-12-10 04:14:53.653813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.653846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.654150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.654325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.654492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.654713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.654852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.654975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.655008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.655148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.655189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.655303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.655335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.655544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.655577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 
00:27:54.402 [2024-12-10 04:14:53.655759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.655792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.656031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.656063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.656187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.656222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.656444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.656477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.656657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.656700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.656870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.656903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.657015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.657047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.657218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.657253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.657492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.657525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.657630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.657662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 
00:27:54.402 [2024-12-10 04:14:53.657845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.657878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.658001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.658033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.658221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.658442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.658474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.658734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.658767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.402 [2024-12-10 04:14:53.658940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.402 [2024-12-10 04:14:53.658973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.402 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.659096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.659128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.659376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.659410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.659607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.659640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.659847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.659880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 
00:27:54.403 [2024-12-10 04:14:53.660064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.660293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.660450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.660603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.660823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.660960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.660992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.661118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.661151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.661275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.661309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.661553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.661585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.661772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.661804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 
00:27:54.403 [2024-12-10 04:14:53.662012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.662046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.662267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.662303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.662543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.662576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.662759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.662792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.662909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.662941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.663139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.663181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.663291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.663324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.663507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.663540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.663725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.663758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.663927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.663961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 
00:27:54.403 [2024-12-10 04:14:53.664197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.664348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.664381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.664573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.664607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.664733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.664766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.664886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.664919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.665033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.665066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.665179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.665212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.665392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.665425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.665603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.665636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.665821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.665854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 
00:27:54.403 [2024-12-10 04:14:53.665979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.666011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.666152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.666196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.666393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.666426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.666605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.666637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.666876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.403 [2024-12-10 04:14:53.666909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.403 qpair failed and we were unable to recover it. 00:27:54.403 [2024-12-10 04:14:53.667103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.404 [2024-12-10 04:14:53.667136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.404 qpair failed and we were unable to recover it. 00:27:54.404 [2024-12-10 04:14:53.667308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.404 [2024-12-10 04:14:53.667381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.404 qpair failed and we were unable to recover it. 00:27:54.404 [2024-12-10 04:14:53.667577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.404 [2024-12-10 04:14:53.667614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.404 qpair failed and we were unable to recover it. 00:27:54.404 [2024-12-10 04:14:53.667817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.404 [2024-12-10 04:14:53.667853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.668121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.668155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 
00:27:54.681 [2024-12-10 04:14:53.668355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.668389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.668576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.668609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.668849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.668883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.669067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.669100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.669291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.669325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.669445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.669480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.669654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.669687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.669805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.669839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.670030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.670063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 00:27:54.681 [2024-12-10 04:14:53.670308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.681 [2024-12-10 04:14:53.670344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.681 qpair failed and we were unable to recover it. 
00:27:54.681 [2024-12-10 04:14:53.670611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.681 [2024-12-10 04:14:53.670644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:54.681 qpair failed and we were unable to recover it.
[... the identical triplet -- connect() failed, errno = 111 / sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats for tqpair=0x120e1a0 through 2024-12-10 04:14:53.700520 ...]
00:27:54.685 [2024-12-10 04:14:53.700488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.685 [2024-12-10 04:14:53.700520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:54.685 qpair failed and we were unable to recover it.
00:27:54.685 [2024-12-10 04:14:53.700783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.685 [2024-12-10 04:14:53.700868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:54.685 qpair failed and we were unable to recover it.
[... the identical triplet repeats for tqpair=0x7f23e0000b90 through 2024-12-10 04:14:53.717028 ...]
00:27:54.687 [2024-12-10 04:14:53.716995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.687 [2024-12-10 04:14:53.717028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:54.687 qpair failed and we were unable to recover it.
00:27:54.687 [2024-12-10 04:14:53.717255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.717289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.717462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.717495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.717683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.717717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.717890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.717922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.718107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.718141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.718345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.718379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.718552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.718584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.718924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.718997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.719140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.719193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.719308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.719342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 
00:27:54.687 [2024-12-10 04:14:53.719610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.719643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.719887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.719921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.720097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.720130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.720253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.720288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.720476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.720509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.720690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.720724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.720984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.721017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.721205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.721240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.721477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.721633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.721666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 
00:27:54.687 [2024-12-10 04:14:53.721784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.721828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.722037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.722071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.722241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.722277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.722381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.722414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.687 [2024-12-10 04:14:53.722600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.687 [2024-12-10 04:14:53.722634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.687 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.722755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.722789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.722916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.722949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.723070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.723104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.723333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.723367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.723484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.723518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 
00:27:54.688 [2024-12-10 04:14:53.723757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.723791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.723967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.724000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.724118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.724152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.724349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.724383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.724534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.724718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.724751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.724996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.725029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.725221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.725256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.725438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.725472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.725589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.725621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 
00:27:54.688 [2024-12-10 04:14:53.725805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.725838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.726013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.726046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.726222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.726256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.726439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.726473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.726682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.726716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.726895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.726928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.727038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.727069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.727277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.727313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.727493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.727526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.727696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.727730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 
00:27:54.688 [2024-12-10 04:14:53.727902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.727936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.728202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.728237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.728416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.728449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.728677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.728711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.728978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.729011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.729265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.729301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.729494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.729527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.729781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.729815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.730020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.730054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.730237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.730272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 
00:27:54.688 [2024-12-10 04:14:53.730459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.730498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.730767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.730801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.731037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.688 [2024-12-10 04:14:53.731070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.688 qpair failed and we were unable to recover it. 00:27:54.688 [2024-12-10 04:14:53.731246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.731282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.731520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.731554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.731743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.731776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.731956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.731989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.732110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.732144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.732263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.732297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.732501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.732534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 
00:27:54.689 [2024-12-10 04:14:53.732724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.732756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.732876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.732910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.733079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.733112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.733302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.733336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.733593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.733627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.733817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.733850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.733976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.734009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.734192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.734227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.734400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.734433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.734616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.734649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 
00:27:54.689 [2024-12-10 04:14:53.734755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.734788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.734968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.735003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.735179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.735213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.735485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.735519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.735699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.735732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.735840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.735874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.736084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.736118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.736327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.736363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.736622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.736655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.736766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.736799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 
00:27:54.689 [2024-12-10 04:14:53.737060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.737094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.737380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.737415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.737594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.737628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.737811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.737845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.738035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.738067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.738242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.738277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.738467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.738500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.738681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.738714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.738966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.738999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.739201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.739237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 
00:27:54.689 [2024-12-10 04:14:53.739352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.739384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.739563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.689 [2024-12-10 04:14:53.739597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.689 qpair failed and we were unable to recover it. 00:27:54.689 [2024-12-10 04:14:53.739727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.739761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.739881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.739913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.740094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.740128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.740254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.740289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.740421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.740455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.740638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.740671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.740878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.740911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.741024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.741058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 
00:27:54.690 [2024-12-10 04:14:53.741175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.741209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.741418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.741451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.741685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.741718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.741894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.741928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.742120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.742154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.742335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.742368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.742477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.742510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.742681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.742714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.742901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.742933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.743135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.743178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 
00:27:54.690 [2024-12-10 04:14:53.743467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.743501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.743686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.743719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.743906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.743940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.744080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.744238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.744390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.744661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.744808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.744991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.745024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 00:27:54.690 [2024-12-10 04:14:53.745196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.745230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it. 
00:27:54.690 [2024-12-10 04:14:53.745350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.690 [2024-12-10 04:14:53.745383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.690 qpair failed and we were unable to recover it.
00:27:54.696 [... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 04:14:53.745 through 04:14:53.790; every retry fails identically ...]
00:27:54.696 [2024-12-10 04:14:53.790811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.790844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.791043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.791077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.791289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.791323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.791511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.791544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.791744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.791778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.792029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.792062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.792235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.792268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.792459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.792492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.792611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.792644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.792830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.792863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 
00:27:54.696 [2024-12-10 04:14:53.793035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.793068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.793310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.793343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.793609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.793642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.793900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.793933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.794147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.794200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.794468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.794501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.794695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.794729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.794835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.794868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.795058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.795091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.795283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.795318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 
00:27:54.696 [2024-12-10 04:14:53.795516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.795549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.795733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.795766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.795950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.795982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.796189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.796224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.796407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.796439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.796558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.796591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.696 [2024-12-10 04:14:53.796772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.696 [2024-12-10 04:14:53.796805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.696 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.797017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.797061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.797230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.797265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.797436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.797470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 
00:27:54.697 [2024-12-10 04:14:53.797735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.797768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.797964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.797997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.798186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.798221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.798414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.798447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.798710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.798743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.798923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.798956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.799204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.799240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.799464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.799497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.799626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.799659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.799904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.799936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 
00:27:54.697 [2024-12-10 04:14:53.800127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.800159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.800365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.800400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.800686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.800718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.800829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.800862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.801044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.801077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.801258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.801293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.801415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.801447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.801621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.801653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.801904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.801937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.802047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.802080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 
00:27:54.697 [2024-12-10 04:14:53.802259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.802294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.802475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.802508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.802747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.802780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.802974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.803007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.803202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.803237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.803409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.803442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.803627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.803660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.803923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.803956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.804222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.804256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.804485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.804518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 
00:27:54.697 [2024-12-10 04:14:53.804706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.804739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.804912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.804944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.805130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.805163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.805278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.805311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.805506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.805539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.805800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.697 [2024-12-10 04:14:53.805833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.697 qpair failed and we were unable to recover it. 00:27:54.697 [2024-12-10 04:14:53.806014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.806046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.806151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.806199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.806367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.806401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.806637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.806669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 
00:27:54.698 [2024-12-10 04:14:53.806850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.806883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.807141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.807183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.807371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.807403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.807605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.807638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.807741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.807773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.807956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.807989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.808249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.808284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.808407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.808440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.808567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.808601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.808781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.808814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 
00:27:54.698 [2024-12-10 04:14:53.808985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.809019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.809210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.809245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.809366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.809399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.809515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.809548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.809737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.809770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.810008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.810042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.810232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.810265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.810457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.810490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.810680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.810713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.810898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.810930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 
00:27:54.698 [2024-12-10 04:14:53.811103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.811136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.811299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.811372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.811498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.811534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.811778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.811812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.811993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.812029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.812203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.812238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.812443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.812476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.812715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.812748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.813124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 
00:27:54.698 [2024-12-10 04:14:53.813348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.813486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.813710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.813924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.813957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.698 [2024-12-10 04:14:53.814145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.698 [2024-12-10 04:14:53.814185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.698 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.814302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.814335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.814528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.814561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.814800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.814838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.814942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.814975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.815216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.815251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 
00:27:54.699 [2024-12-10 04:14:53.815449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.815482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.815661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.815694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.815878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.815911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.816128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.816161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.816290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.816324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.816442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.816476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.816714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.816746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.816919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.816952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.817144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.817188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.817324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.817357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 
00:27:54.699 [2024-12-10 04:14:53.817488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.817521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.817734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.817767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.817954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.817987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.818161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.818206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.818400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.818433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.818606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.818639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.818762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.818795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.818976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.819008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.819126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.819159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 00:27:54.699 [2024-12-10 04:14:53.819406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.699 [2024-12-10 04:14:53.819440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:54.699 qpair failed and we were unable to recover it. 
00:27:54.699 [2024-12-10 04:14:53.819615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.699 [2024-12-10 04:14:53.819648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:54.699 qpair failed and we were unable to recover it.
[log condensed: the three-message group above repeats 119 more times between 04:14:53.819853 and 04:14:53.844547, identical except for timestamps; every attempt against tqpair=0x7f23d8000b90 ends with "qpair failed and we were unable to recover it."]
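[note: errno 111 on Linux is ECONNREFUSED. Each connect() attempt toward the NVMe/TCP target address 10.0.0.2, port 4420 is refused immediately, typically with a TCP RST because nothing is listening there at this point in the test, so every qpair setup fails on the spot. The following minimal sketch, which is not part of the test and assumes only a Linux host that can reach the test network, reproduces the same probe outside SPDK:]

    /* probe_4420.c - minimal sketch of the connect() probe seen in the log.
     * The address and port are taken from the log above; they are not
     * requirements of the example. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);            /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111,
             * the same ECONNREFUSED value the SPDK log reports. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }
        close(fd);
        return 0;
    }

[Where an OpenBSD-style nc is available, 'nc -z 10.0.0.2 4420' performs the same reachability check from a shell.]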
[log condensed: 4 more identical tqpair=0x7f23d8000b90 groups between 04:14:53.844655 and 04:14:53.845215, then:]
00:27:54.702 [2024-12-10 04:14:53.845413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c0f0 is same with the state(6) to be set
00:27:54.702 [2024-12-10 04:14:53.845759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.702 [2024-12-10 04:14:53.845833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:54.702 qpair failed and we were unable to recover it.
[log condensed: 4 more identical tqpair=0x120e1a0 groups between 04:14:53.845976 and 04:14:53.846742]
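[note: the 04:14:53.845413 message is the only non-connect diagnostic in this run. Taken at face value, nvme_tcp_qpair_set_recv_state() was asked to move tqpair=0x121c0f0 into the receive state it already holds, state 6, and it logs the redundant transition; the subsequent attempts then use a different qpair object (the pointer changes from 0x7f23d8000b90 to 0x120e1a0). The sketch below illustrates that kind of guard. It is a reconstruction for illustration only, not SPDK's actual code, and the state names and numbering in it are hypothetical:]

    /* Illustrative state-setter guard. Not SPDK's code; the enum values are
     * hypothetical. It shows how requesting a transition into the current
     * state produces a diagnostic like the one at 04:14:53.845413. */
    #include <stdio.h>

    enum recv_state {
        RS_AWAIT_READY,   /* 0 */
        RS_AWAIT_CH,      /* 1 */
        RS_AWAIT_PSH,     /* 2 */
        RS_AWAIT_PAYLOAD, /* 3 */
        RS_COMPLETE,      /* 4 */
        RS_QUIESCING,     /* 5 */
        RS_ERROR          /* 6 */
    };

    struct tqpair {
        enum recv_state recv_state;
    };

    static void set_recv_state(struct tqpair *tq, enum recv_state new_state)
    {
        if (tq->recv_state == new_state) {
            /* Redundant transition: report it and keep the current state. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)new_state);
            return;
        }
        tq->recv_state = new_state;
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = RS_ERROR };
        set_recv_state(&tq, RS_ERROR); /* triggers the diagnostic */
        return 0;
    }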
[log condensed: the tqpair=0x120e1a0 three-message group repeats 80 more times between 04:14:53.847010 and 04:14:53.864240, identical except for timestamps; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:54.705 [2024-12-10 04:14:53.864429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.864463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.864652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.864686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.864888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.864919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.865029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.865063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.865349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.865389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.865642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.865674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.865852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.865885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.866004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.866217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.866374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 
00:27:54.705 [2024-12-10 04:14:53.866534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.866758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.866965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.866997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.867247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.867283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.867526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.867558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.867684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.867716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.867910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.867944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.868117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.868151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.868374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.868406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.868540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.868574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 
00:27:54.705 [2024-12-10 04:14:53.868752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.868784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.868964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.868998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.869175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.869210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.869409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.869444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.869634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.869667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.869775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.869808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.869939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.869971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.870086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.870120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.870312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.870347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.870531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.870566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 
00:27:54.705 [2024-12-10 04:14:53.870687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.870719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.705 [2024-12-10 04:14:53.870831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.705 [2024-12-10 04:14:53.870864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.705 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.870973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.871006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.871192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.871228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.871520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.871555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.871691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.871724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.871896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.871929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.872119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.872152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.872370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.872404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.872535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.872568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 
00:27:54.706 [2024-12-10 04:14:53.872681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.872715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.872901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.872935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.873121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.873154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.873350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.873384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.873520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.873552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.873743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.873776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.873949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.873982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.874164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.874206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.874343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.874376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.874564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.874598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 
00:27:54.706 [2024-12-10 04:14:53.874710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.874749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.874955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.874987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.875195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.875231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.875377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.875409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.875586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.875620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.875885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.875918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.876101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.876134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.876262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.876297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.876413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.876446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.876618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.876652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 
00:27:54.706 [2024-12-10 04:14:53.876916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.876949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.877147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.877188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.877374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.877407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.877595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.877628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.877803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.877838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.878014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.878046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.878153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.878196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.878304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.878336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.878540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.878573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.878815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.878847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 
00:27:54.706 [2024-12-10 04:14:53.879037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.879070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.706 qpair failed and we were unable to recover it. 00:27:54.706 [2024-12-10 04:14:53.879209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.706 [2024-12-10 04:14:53.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.879434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.879467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.879594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.879627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.879840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.879872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.880061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.880094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.880277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.880313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.880430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.880470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.880597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.880629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.880762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.880795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 
00:27:54.707 [2024-12-10 04:14:53.880976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.881151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.881313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.881548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.881699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.881853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.881885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.882074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.882107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.882292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.882326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.882508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.882541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.882717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.882750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 
00:27:54.707 [2024-12-10 04:14:53.882933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.882966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.883093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.883126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.883310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.883344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.883534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.883566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.883687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.883719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.883841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.883873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.884042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.884076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.884262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.884295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.884484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.884516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.884689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.884724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 
00:27:54.707 [2024-12-10 04:14:53.884849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.884881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.885008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.885041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.885259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.885293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.885422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.885454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.885719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.885752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.885946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.885980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.886274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.886308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.886575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.886607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.886843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.886876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.707 [2024-12-10 04:14:53.887070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.887104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 
00:27:54.707 [2024-12-10 04:14:53.887385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.707 [2024-12-10 04:14:53.887420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.707 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.887606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.887638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.887881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.887915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.888029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.888062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.888204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.888237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.888361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.888395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.888651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.888683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.888858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.888890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.889157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.889204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.889396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 
00:27:54.708 [2024-12-10 04:14:53.889610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.889644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.889833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.889866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.890079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.890112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.890246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.890280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.890466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.890500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.890741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.890774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.890896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.890928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.891112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.891145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.891273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.891306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 00:27:54.708 [2024-12-10 04:14:53.891513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.891546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it. 
00:27:54.708 [2024-12-10 04:14:53.891738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.708 [2024-12-10 04:14:53.891770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:54.708 qpair failed and we were unable to recover it.
00:27:54.708 [previous 3 messages repeated 109 more times for tqpair=0x120e1a0 between 2024-12-10 04:14:53.891984 and 04:14:53.915238; every attempt failed with errno = 111]
00:27:54.711 [2024-12-10 04:14:53.915469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.711 [2024-12-10 04:14:53.915544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.711 qpair failed and we were unable to recover it.
00:27:54.711 [previous 3 messages repeated 99 more times for tqpair=0x7f23d4000b90 between 2024-12-10 04:14:53.915759 and 04:14:53.936401; every attempt failed with errno = 111]
00:27:54.714 [2024-12-10 04:14:53.936572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.936606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.936738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.936770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.936956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.936989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.937114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.937150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.937362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.937397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.937514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.937545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.937654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.937688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.937814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.937847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.938036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.938069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.938205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.938240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 
00:27:54.714 [2024-12-10 04:14:53.938360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.938393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.938599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.938631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.938815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.938848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.939033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.939067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.939270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.939305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.939479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.939514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.939698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.939731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.939837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.939869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.940108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.940141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.940347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.940381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 
00:27:54.714 [2024-12-10 04:14:53.940678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.940717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.940891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.940924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.941131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.941163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.941313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.941346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.941549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.941583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.941850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.941882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.942073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.942106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.942296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.942329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.942516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.942550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.942721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.942753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 
00:27:54.714 [2024-12-10 04:14:53.942946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.942978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.943184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.943219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.943408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.943440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.714 qpair failed and we were unable to recover it. 00:27:54.714 [2024-12-10 04:14:53.943632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.714 [2024-12-10 04:14:53.943666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.943808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.943842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.944030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.944061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.944257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.944292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.944419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.944453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.944699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.944731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.944941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.944975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 
00:27:54.715 [2024-12-10 04:14:53.945103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.945135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.945260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.945293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.715 [2024-12-10 04:14:53.945465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.715 [2024-12-10 04:14:53.945498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.715 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.945718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.945751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.945869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.945903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.946176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.946211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.946477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.946511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.946630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.946662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.946792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.946825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.947009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.947043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 
00:27:54.995 [2024-12-10 04:14:53.947281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.947316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.947521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.947554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.947737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.947770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.947954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.947988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.948191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.948225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.948342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.948374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.948488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.948520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.948641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.948672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.948852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.948886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.949065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.949099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 
00:27:54.995 [2024-12-10 04:14:53.949222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.949262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.949447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.949479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.949695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.949729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.949858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.949891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.950090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.950123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.950312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.995 [2024-12-10 04:14:53.950347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.995 qpair failed and we were unable to recover it. 00:27:54.995 [2024-12-10 04:14:53.950536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.950570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.950742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.950774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.950968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.951137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 
00:27:54.996 [2024-12-10 04:14:53.951420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.951634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.951802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.951960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.951992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.952129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.952164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.952409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.952523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.952555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.952679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.952712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.952887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.952920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.953122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.953155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 
00:27:54.996 [2024-12-10 04:14:53.953277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.953310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.953426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.953458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.953573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.953606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.953809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.953842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.954030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.954064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.954238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.954273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.954465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.954498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.954631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.954665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.954850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.954882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.955073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.955106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 
00:27:54.996 [2024-12-10 04:14:53.955311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.955346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.955461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.955492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.955668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.955700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.955885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.955918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.956956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.956988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 
00:27:54.996 [2024-12-10 04:14:53.957246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.957285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.957403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.957434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.957562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.957596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.957731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.957762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.957880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.957911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.996 qpair failed and we were unable to recover it. 00:27:54.996 [2024-12-10 04:14:53.958015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.996 [2024-12-10 04:14:53.958047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.958224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.958258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.958481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.958514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.958794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.958990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.959025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 
00:27:54.997 [2024-12-10 04:14:53.959208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.959242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.959467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.959499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.959697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.959730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.959859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.959890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.960082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.960115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.960329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.960363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.960545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.960577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.960750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.960784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.960888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.960920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.961107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.961141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 
00:27:54.997 [2024-12-10 04:14:53.961327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.961359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.961476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.961509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.961682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.961715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.961886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.961918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.962159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.962203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.962332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.962364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.962563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.962596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.962710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.962744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.963024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.963057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 00:27:54.997 [2024-12-10 04:14:53.963197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.963232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it. 
00:27:54.997 [2024-12-10 04:14:53.963344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.997 [2024-12-10 04:14:53.963376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:54.997 qpair failed and we were unable to recover it.
[the three-line connect()/qpair failure above repeats back-to-back with successive timestamps from 04:14:53.963 through 04:14:53.979 for tqpair=0x7f23d4000b90; every attempt targets 10.0.0.2 port 4420 and fails with errno = 111]
00:27:54.999 [2024-12-10 04:14:53.979593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.999 [2024-12-10 04:14:53.979665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:54.999 qpair failed and we were unable to recover it.
[the same failure sequence then repeats for tqpair=0x7f23e0000b90 from 04:14:53.979 through 04:14:54.007, with every attempt failing identically and no qpair recovered]
00:27:55.003 [2024-12-10 04:14:54.008003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.008041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.008160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.008225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.008354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.008386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.008514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.008548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.008813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.008846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.008977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.009122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.009361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.009509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.009656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 
00:27:55.003 [2024-12-10 04:14:54.009868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.009903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.010084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.010116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.010390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.010424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.010546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.010579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.010833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.010866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.011117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.011150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.011282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.011316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.011446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.011478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.011600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.011632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.011751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.011784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 
00:27:55.003 [2024-12-10 04:14:54.011984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.012016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.012145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.012189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.012429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.012461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.012657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.012692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.012899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.012932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.013111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.013145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.013279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.013313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.013455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.013489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.013665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.013698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.013884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.013924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 
00:27:55.003 [2024-12-10 04:14:54.014109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.014142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.014421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.014455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.014642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.014675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.014879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.003 [2024-12-10 04:14:54.014911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.003 qpair failed and we were unable to recover it. 00:27:55.003 [2024-12-10 04:14:54.015243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.015278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.015527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.015560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.015732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.015765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.015952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.015986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.016095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.016129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.016338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.016372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 
00:27:55.004 [2024-12-10 04:14:54.016572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.016613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.016876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.016909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.017963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.017997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.018110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.018144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.018341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.018376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 
00:27:55.004 [2024-12-10 04:14:54.018486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.018519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.018642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.018675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.018940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.018975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.019107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.019140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.019342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.019378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.019565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.019600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.019751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.019786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.019966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.020000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.020210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.020245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.020447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.020481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 
00:27:55.004 [2024-12-10 04:14:54.020626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.020665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.020782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.020815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.021030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.021065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.021313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.021349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.021589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.021623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.021830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.021865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.022053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.022089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.022363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.022400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.004 [2024-12-10 04:14:54.022602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.004 [2024-12-10 04:14:54.022637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.004 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.022751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.022783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 
00:27:55.005 [2024-12-10 04:14:54.022955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.022990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.023107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.023159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.023295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.023330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.023472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.023505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.023689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.023724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.023898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.023931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.024047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.024080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.024266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.024302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.024481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.024537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.024719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.024753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 
00:27:55.005 [2024-12-10 04:14:54.025009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.025044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.025235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.025271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.025454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.025487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.025753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.025787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.025984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.026030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.026211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.026246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.026417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.026450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.026639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.026672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.026859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.026893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.027089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 
00:27:55.005 [2024-12-10 04:14:54.027255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.027406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.027564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.027714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.027868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.027901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.028083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.028117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.028326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.028360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.028551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.028584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.028759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.028792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.029035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.029069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 
00:27:55.005 [2024-12-10 04:14:54.029262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.029297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.029471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.029507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.029622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.029655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.029841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.029874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.030056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.030089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.030279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.030314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.030425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.030459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.005 [2024-12-10 04:14:54.030666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.005 [2024-12-10 04:14:54.030707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.005 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.030914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.031089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.031124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 
00:27:55.006 [2024-12-10 04:14:54.031382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.031415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.031535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.031568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.031752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.031785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.031905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.031938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.032157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.032222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.032358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.032390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.032589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.032623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.032912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.032946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.033140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.033186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.033313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.033346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 
00:27:55.006 [2024-12-10 04:14:54.033485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.033518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.033659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.033691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.033809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.033843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.034030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.034062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.034240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.034275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.034479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.034512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.034712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.034747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.034933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.034965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.035185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.035218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 00:27:55.006 [2024-12-10 04:14:54.035347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.006 [2024-12-10 04:14:54.035382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.006 qpair failed and we were unable to recover it. 
00:27:55.006 [2024-12-10 04:14:54.035555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.006 [2024-12-10 04:14:54.035588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.006 qpair failed and we were unable to recover it.
[... the same three-line error group repeats 210 times in total between 04:14:54.035555 and 04:14:54.078812, differing only in timestamps; every occurrence reports connect() failed, errno = 111 for tqpair=0x7f23e0000b90, addr=10.0.0.2, port=4420; first and last occurrences shown ...]
00:27:55.012 [2024-12-10 04:14:54.078780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.012 [2024-12-10 04:14:54.078812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.012 qpair failed and we were unable to recover it.
00:27:55.012 [2024-12-10 04:14:54.078948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.078980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.079094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.079127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.079251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.079285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.079473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.079507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.079698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.079730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.080016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.080048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.080177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.080212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.080407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.080446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.080630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.080663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.080784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.080817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 
00:27:55.012 [2024-12-10 04:14:54.081058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.081207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.081445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.081675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.081828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.081969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.081999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.082185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.082220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.082359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.082392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.082584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.082617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.082905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.082938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 
00:27:55.012 [2024-12-10 04:14:54.083187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.083222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.083337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.083371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.083660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.083692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.083880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.083914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.084089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.084124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.084259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.084294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.084495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.084529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.084711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.084744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.084985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.085018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.085212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.085254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 
00:27:55.012 [2024-12-10 04:14:54.085428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.085461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.085637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.085670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.085854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.085888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.086005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.086037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.086303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.086376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.086518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.086555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.086677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.086712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.012 [2024-12-10 04:14:54.086932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.012 [2024-12-10 04:14:54.086964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.012 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.087140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.087190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.087322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.087356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 
00:27:55.013 [2024-12-10 04:14:54.087614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.087648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.087777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.087811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.087932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.087963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.088088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.088120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.088313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.088359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.088555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.088589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.088831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.088864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.089051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.089093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.089302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.089337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.089517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.089551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 
00:27:55.013 [2024-12-10 04:14:54.089677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.089709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.089886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.089919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.090035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.090068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.090185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.090220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.090408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.090442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.090657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.090691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.090825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.090858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.091107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.091141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.091331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.091367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.091612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.091644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 
00:27:55.013 [2024-12-10 04:14:54.091814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.091848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.091974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.092006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.092143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.092184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.092398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.092430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.092606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.092639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.092763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.092797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.092989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.093153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.093326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.093472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 
00:27:55.013 [2024-12-10 04:14:54.093633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.093789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.093823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.013 [2024-12-10 04:14:54.094005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.013 [2024-12-10 04:14:54.094038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.013 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.094233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.094278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.094508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.094582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.094820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.094893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.095125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.095161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.095380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.095414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.095587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.095620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.095729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.095761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 
00:27:55.014 [2024-12-10 04:14:54.095893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.095928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.096113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.096145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.096375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.096408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.096697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.096730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.096847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.096879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.097016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.097049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.097181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.097217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.097420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.097465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.097577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.097610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.097874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.097907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 
00:27:55.014 [2024-12-10 04:14:54.098019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.098051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.098258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.098293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.098536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.098570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.098695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.098728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.098967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.099001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.099242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.099276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.099388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.099421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.099545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.099578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.099749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.099782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.099969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 
00:27:55.014 [2024-12-10 04:14:54.100109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.100339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.100486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.100695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.100842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.100874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.101044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.101076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.101199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.101233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.101407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.101440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.101615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.101649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.101851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.101884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 
00:27:55.014 [2024-12-10 04:14:54.102146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.102193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.102321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.014 [2024-12-10 04:14:54.102355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.014 qpair failed and we were unable to recover it. 00:27:55.014 [2024-12-10 04:14:54.102552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.102585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.102761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.102795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.102947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.103022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.103204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.103277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.103417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.103456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.103715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.103747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.103880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.103916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.104110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.104144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 
00:27:55.015 [2024-12-10 04:14:54.104272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.104306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.104572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.104611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.104735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.104769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.104884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.104918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.105040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.105073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.105256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.105291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.105557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.105591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.105781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.105827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.105962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.105997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 00:27:55.015 [2024-12-10 04:14:54.106191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.015 [2024-12-10 04:14:54.106228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.015 qpair failed and we were unable to recover it. 
00:27:55.015 [2024-12-10 04:14:54.106492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.015 [2024-12-10 04:14:54.106527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.015 qpair failed and we were unable to recover it.
...
00:27:55.015 [2024-12-10 04:14:54.108852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.015 [2024-12-10 04:14:54.108891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:55.015 qpair failed and we were unable to recover it.
...
00:27:55.016 [2024-12-10 04:14:54.111589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.016 [2024-12-10 04:14:54.111663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.016 qpair failed and we were unable to recover it.
00:27:55.016 [2024-12-10 04:14:54.111814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.016 [2024-12-10 04:14:54.111857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.016 qpair failed and we were unable to recover it.
...
00:27:55.016 [2024-12-10 04:14:54.116274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.016 [2024-12-10 04:14:54.116313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.016 qpair failed and we were unable to recover it.
...
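The errno = 111 in every record above is ECONNREFUSED on Linux: the connect() toward 10.0.0.2 port 4420 (the conventional NVMe/TCP port, as shown in the log) is answered with a TCP reset, which normally means nothing is accepting on that address yet. The following stand-alone probe reproduces that exact check outside of SPDK; it is a minimal illustrative sketch, not part of the test harness, with the address and port taken from the log.

/* probe.c - attempt one TCP connect() to the target from the log and
 * report errno; illustrative only, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* fixed literal, so no error check */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Against a dead target this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected\n");
    close(fd);
    return 0;
}

Compiled with a plain cc invocation, the probe keeps reporting errno 111 for as long as no listener is bound on 10.0.0.2:4420, and connects cleanly once the target comes up; the failure loop resumes below.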
00:27:55.017 [2024-12-10 04:14:54.123212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.017 [2024-12-10 04:14:54.123256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.017 qpair failed and we were unable to recover it.
...
00:27:55.020 [2024-12-10 04:14:54.150111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-12-10 04:14:54.150145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-12-10 04:14:54.150414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.020 [2024-12-10 04:14:54.150448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.020 qpair failed and we were unable to recover it. 00:27:55.020 [2024-12-10 04:14:54.150586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.020 [2024-12-10 04:14:54.150626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.020 qpair failed and we were unable to recover it. 00:27:55.020 [2024-12-10 04:14:54.150768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.020 [2024-12-10 04:14:54.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.020 qpair failed and we were unable to recover it. 00:27:55.020 [2024-12-10 04:14:54.150994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.151027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.151243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.151278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.151468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.151502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.151688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.151722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.151894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.151927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.152110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.152264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 
00:27:55.021 [2024-12-10 04:14:54.152486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.152655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.152816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.152955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.152988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.153182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.153218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.153436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.153470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.153640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.153674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.153777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.153812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.153984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.154017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.154198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.154234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 
00:27:55.021 [2024-12-10 04:14:54.154424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.154457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.154656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.154691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.154824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.154858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.155929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.155962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.156140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.156183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 
00:27:55.021 [2024-12-10 04:14:54.156293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.156326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.156434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.156468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.156655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.156688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.156878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.156912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.157080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.157113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.157260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.157294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.157422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.157458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.021 qpair failed and we were unable to recover it. 00:27:55.021 [2024-12-10 04:14:54.157694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.021 [2024-12-10 04:14:54.157727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.157913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.157946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.158129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.158163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 
00:27:55.022 [2024-12-10 04:14:54.158306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.158340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.158457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.158497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.158620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.158653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.158849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.158882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.158992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.159150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.159325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.159486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.159720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.159882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.159918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 
00:27:55.022 [2024-12-10 04:14:54.160038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.160073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.160312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.160349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.160449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.160482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.160604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.160638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.160777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.160810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.161007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.161043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.161248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.161288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.161412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.161447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.161623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.161655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.161779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.161813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 
00:27:55.022 [2024-12-10 04:14:54.162050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.162265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.162482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.162619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.162767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.162940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.162973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.163147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.163191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.163328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.163361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.163481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.163515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.163723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.163757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 
00:27:55.022 [2024-12-10 04:14:54.163897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.163931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.164070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.164103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.164338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.164372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.164504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.164537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.164677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.164712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.164832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.164866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.022 qpair failed and we were unable to recover it. 00:27:55.022 [2024-12-10 04:14:54.165056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.022 [2024-12-10 04:14:54.165090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.165359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.165394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.165604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.165637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.165763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.165797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 
00:27:55.023 [2024-12-10 04:14:54.165923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.165957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.166080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.166120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.166251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.166287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.166489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.166522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.166694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.166728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.166945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.166979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.167147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.167198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.167454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.167487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.167727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.167760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.167891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.167924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 
00:27:55.023 [2024-12-10 04:14:54.168038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.168071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.168259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.168293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.168484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.168517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.168696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.168730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.168931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.168965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.169155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.169203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.169390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.169423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.169618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.169652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.169871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.169904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.170025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.170058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 
00:27:55.023 [2024-12-10 04:14:54.170178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.170211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.170479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.170512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.170698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.170731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.170975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.171189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.171358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.171596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.171764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.171923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.171955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.172082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.172113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 
00:27:55.023 [2024-12-10 04:14:54.172234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.172268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.172404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.172437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.172732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.172765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.172874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.172908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.173146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.173188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.023 [2024-12-10 04:14:54.173308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.023 [2024-12-10 04:14:54.173340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.023 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.173478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.173510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.173732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.173765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.173942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.173975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.174085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.174117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 
00:27:55.024 [2024-12-10 04:14:54.174320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.174354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.174543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.174583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.174762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.174793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.174903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.174934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.175105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.175138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.175346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.175380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.175553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.175586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.175704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.175736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.175917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.175950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 00:27:55.024 [2024-12-10 04:14:54.176162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.176209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it. 
00:27:55.024 [2024-12-10 04:14:54.176334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.024 [2024-12-10 04:14:54.176366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.024 qpair failed and we were unable to recover it.
00:27:55.024 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 04:14:54.176 through 04:14:54.220: every connect() attempt fails with errno = 111 against addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."; the failing tqpair cycles through 0x7f23d8000b90, 0x120e1a0, and 0x7f23e0000b90 ...]
00:27:55.029 [2024-12-10 04:14:54.220852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.220886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.221071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.221105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.221293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.221328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.221519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.221553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.221668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.221701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.221867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.221900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.222010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.222045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.222156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.029 [2024-12-10 04:14:54.222203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.029 qpair failed and we were unable to recover it. 00:27:55.029 [2024-12-10 04:14:54.222314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.222349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.222619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.222672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 
00:27:55.030 [2024-12-10 04:14:54.224033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.224087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.224305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.224340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.224526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.224561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.224800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.224832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.224962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.224993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.225137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.225310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.225462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.225607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.225758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 
00:27:55.030 [2024-12-10 04:14:54.225911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.225945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.226059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.226093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.226221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.226255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.226417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.226600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.226633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.226884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.226916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.227048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.227081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.227328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.227363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.227546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.227580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.227759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.227792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 
00:27:55.030 [2024-12-10 04:14:54.227920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.227956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.228143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.228186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.228307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.228340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.229663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.229709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.229957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.229988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.230104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.230135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.230259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.230290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.230476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.230507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.230747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.230776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.230965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.230997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 
00:27:55.030 [2024-12-10 04:14:54.231106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.231137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.231380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.231412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.030 [2024-12-10 04:14:54.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.030 [2024-12-10 04:14:54.231617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.030 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.231721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.231751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.231925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.231955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.232099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.232133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.232271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.232305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.232486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.232518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.232696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.232729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.232852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.232885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 
00:27:55.031 [2024-12-10 04:14:54.232997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.233151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.233383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.233599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.233765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.233921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.233951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.235240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.235288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.235573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.235605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.235719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.235750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.235891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.235925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 
00:27:55.031 [2024-12-10 04:14:54.236155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.236378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.236513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.236663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.236822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.236957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.236987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.237121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.237151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.237283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.237314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.238554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.238602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.238890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.238921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 
00:27:55.031 [2024-12-10 04:14:54.239189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.239222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.239387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.239413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.239587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.239612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.239720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.239743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.239847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.239873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.239990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 
00:27:55.031 [2024-12-10 04:14:54.240679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.031 qpair failed and we were unable to recover it. 00:27:55.031 [2024-12-10 04:14:54.240951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.031 [2024-12-10 04:14:54.240977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.241878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.241903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 
00:27:55.032 [2024-12-10 04:14:54.241997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.242023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.242266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.242340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.242613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.242686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.242892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.242964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.243853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.243880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 
00:27:55.032 [2024-12-10 04:14:54.243986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.244014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.244114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.244140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.244344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.244377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.244551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.244585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.245729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.245771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.245901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.245929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.246201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.246230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.247611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.247656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.247800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.247827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.247930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.247955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 
00:27:55.032 [2024-12-10 04:14:54.248125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.248150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.248272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.248297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.248470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.248496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.248609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.248635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.248798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.248824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 
00:27:55.032 [2024-12-10 04:14:54.249823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.249974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.032 [2024-12-10 04:14:54.249999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.032 qpair failed and we were unable to recover it. 00:27:55.032 [2024-12-10 04:14:54.250097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.250867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.251001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 00:27:55.033 [2024-12-10 04:14:54.251203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.033 [2024-12-10 04:14:54.251228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.033 qpair failed and we were unable to recover it. 
00:27:55.033 [2024-12-10 04:14:54.251330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.033 [2024-12-10 04:14:54.251354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.033 qpair failed and we were unable to recover it.
00:27:55.033 [... the same three-line error sequence repeats verbatim for every subsequent reconnect attempt, target timestamps 04:14:54.251507 through 04:14:54.292298 (console time 00:27:55.033 to 00:27:55.317): connect() fails with errno = 111 in posix_sock_create, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:55.317 [2024-12-10 04:14:54.292417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.292450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.292632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.292665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.292791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.292823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.292994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.293027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.293236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.293270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.293401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.293434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.293640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.293673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.293786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.293818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.293998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.294032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.294223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.294258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 
00:27:55.317 [2024-12-10 04:14:54.294442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.294475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.294598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.294630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.294830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.294863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.295035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.295068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.295256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.295291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.295499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.295533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.295660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.317 [2024-12-10 04:14:54.295694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.317 qpair failed and we were unable to recover it. 00:27:55.317 [2024-12-10 04:14:54.295817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.295850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.296104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.296138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.296355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.296389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 
00:27:55.318 [2024-12-10 04:14:54.296591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.296624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.296835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.296867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.296998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.297036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.297234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.297269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.297452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.297484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.297651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.297684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.297800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.297833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.298004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.298203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.298359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 
00:27:55.318 [2024-12-10 04:14:54.298510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.298717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.298857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.298890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.299040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.299192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.299338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.299556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.299762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.299968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.300001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.300187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 
00:27:55.318 [2024-12-10 04:14:54.300464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.300497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.300623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.300655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.300839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.300872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.300992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.301161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.301383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.301525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.301768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.301910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.301942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.302072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.302111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 
00:27:55.318 [2024-12-10 04:14:54.302325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.302360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.302569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.302601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.302807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.302840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.303038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.303071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.303241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.303275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.303407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.303440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.318 [2024-12-10 04:14:54.303649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.318 [2024-12-10 04:14:54.303682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.318 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.303801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.303834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.304022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.304187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 
00:27:55.319 [2024-12-10 04:14:54.304343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.304566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.304711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.304945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.304979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.305086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.305119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.305324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.305359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.305599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.305632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.305762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.305795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.305975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.306200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 
00:27:55.319 [2024-12-10 04:14:54.306383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.306606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.306774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.306912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.306944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.307059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.307092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.307300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.307335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.307543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.307575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.307709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.307743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.307847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.307881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.308067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.308100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 
00:27:55.319 [2024-12-10 04:14:54.308342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.308376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.308562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.308596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.308865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.308898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.309926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.309958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.310082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.310115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 
00:27:55.319 [2024-12-10 04:14:54.310313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.310385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.310647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.310684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.310863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.310897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.311012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.311045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.311159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.311203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.311315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.319 [2024-12-10 04:14:54.311347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.319 qpair failed and we were unable to recover it. 00:27:55.319 [2024-12-10 04:14:54.311564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.311597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.311769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.311802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.311923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.311956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.312073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 
00:27:55.320 [2024-12-10 04:14:54.312302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.312461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.312624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.312781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.312931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.312964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.313141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.313187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.313427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.313459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.313568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.313601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.313789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.313822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.313942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.313974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 
00:27:55.320 [2024-12-10 04:14:54.314190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.314226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.314401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.314433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.314535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.314568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.314759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.314792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.314918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.314950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.315057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.315090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.315272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.315307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.315449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.315482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.315654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.315687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.315812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.315844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 
00:27:55.320 [2024-12-10 04:14:54.316033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.316065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.316224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.316715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.316748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.316864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.316896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.317000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.317033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.317137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.317177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.317370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.317403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.317629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.317662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 00:27:55.320 [2024-12-10 04:14:54.317778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.320 [2024-12-10 04:14:54.317810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.320 qpair failed and we were unable to recover it. 
00:27:55.320 [2024-12-10 04:14:54.317989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:55.320 [2024-12-10 04:14:54.318022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 
00:27:55.320 qpair failed and we were unable to recover it. 
00:27:55.326 [last error pair repeated ~200 times, 2024-12-10 04:14:54.318 through 04:14:54.368: connect() failed, errno = 111; sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:27:55.326 [2024-12-10 04:14:54.369008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.369041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.369282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.369317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.369507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.369540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.369717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.369753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.369960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.369994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.370246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.370285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.370486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.370519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.370648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.370682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.370963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.370998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.371285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.371321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 
00:27:55.326 [2024-12-10 04:14:54.371586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.371622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.371950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.371984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.372102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.372137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.372347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.372381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.372503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.372536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.372827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.372860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.373044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.373077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.373272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.373308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.373489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.373522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 00:27:55.326 [2024-12-10 04:14:54.373762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.326 [2024-12-10 04:14:54.373797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.326 qpair failed and we were unable to recover it. 
00:27:55.326 [2024-12-10 04:14:54.373916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.373950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.374078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.374111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.374314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.374349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.374614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.374648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.374860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.374893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.375032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.375066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.375202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.375238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.375384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.375417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.375603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.375636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.375874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.375907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 
00:27:55.327 [2024-12-10 04:14:54.376165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.376214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.376366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.376398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.376602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.376636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.376905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.376937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.377200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.377235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.377414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.377577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.377612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.377905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.377939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.378062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.378096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.378274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.378310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 
00:27:55.327 [2024-12-10 04:14:54.378433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.378466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.378595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.378628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.378757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.378791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.379056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.379090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.379367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.379404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.379614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.379647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.379787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.379820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.381186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.381238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.381399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.381431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.381641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.381677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 
00:27:55.327 [2024-12-10 04:14:54.381988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.382022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.382218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.382253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.382445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.382479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.382699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.382841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.382874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.382982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.383018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.383141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.383211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.383412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.383448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.383663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.327 [2024-12-10 04:14:54.383697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.327 qpair failed and we were unable to recover it. 00:27:55.327 [2024-12-10 04:14:54.383943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.383977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 
00:27:55.328 [2024-12-10 04:14:54.384164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.384210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.384344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.384378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.384499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.384533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.384749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.384783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.385023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.385056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.385261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.385298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.385418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.385451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.385590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.385625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.385801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.385834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.386072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.386104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 
00:27:55.328 [2024-12-10 04:14:54.386335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.386377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.386639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.386673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.386929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.386964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.387157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.387199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.387408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.387442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.387620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.387655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.387905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.387938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.388217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.388252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.388449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.388482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.388619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.388652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 
00:27:55.328 [2024-12-10 04:14:54.388890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.388923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.389177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.389212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.389332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.389364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.389603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.389637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.389995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.390029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.390203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.390238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.390490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.390523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.390642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.390675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.390927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.390959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.391255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.391290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 
00:27:55.328 [2024-12-10 04:14:54.391405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.391437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.391590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.391623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.391804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.391836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.392018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.392051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.392180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.392214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.392406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.392440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.392616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.392648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.392915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.328 [2024-12-10 04:14:54.392988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.328 qpair failed and we were unable to recover it. 00:27:55.328 [2024-12-10 04:14:54.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.393244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.393529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.393564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 
00:27:55.329 [2024-12-10 04:14:54.393829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.393863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.394045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.394078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.394321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.394356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.394549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.394582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.394835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.394868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.395110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.395142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.395391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.395427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.395656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.395689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.395938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.395970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.396236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.396271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 
00:27:55.329 [2024-12-10 04:14:54.396563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.396597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.396787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.396821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.397001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.397034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.397271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.397305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.397578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.397754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.397787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.398030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.398065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.398266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.398300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.398451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.398484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.398760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.398794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 
00:27:55.329 [2024-12-10 04:14:54.398984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.399016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.399260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.399294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.399534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.399567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.399704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.399737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.399866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.399901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.400186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.400219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.400405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.400438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.400632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.400665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.400849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.400881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 00:27:55.329 [2024-12-10 04:14:54.401094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.329 [2024-12-10 04:14:54.401136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.329 qpair failed and we were unable to recover it. 
00:27:55.329 [2024-12-10 04:14:54.401351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.329 [2024-12-10 04:14:54.401385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.329 qpair failed and we were unable to recover it.
00:27:55.335 [2024-12-10 04:14:54.450972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.451006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.451210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.451246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.451436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.451469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.451735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.451770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.452042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.452076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.452206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.452240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.452361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.452394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.452624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.452658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.452783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.452817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.453014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.453047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 
00:27:55.335 [2024-12-10 04:14:54.453261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.453294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.453477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.453512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.453713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.453746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.453948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.453982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.454162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.454207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.454359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.454394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.454586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.454619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.454884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.454918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.455119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.455152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.455364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.455399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 
00:27:55.335 [2024-12-10 04:14:54.455601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.455635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.455825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.455859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.456129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.456163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.456356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.456391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.456525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.456559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.456701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.456735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.335 [2024-12-10 04:14:54.457019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.335 [2024-12-10 04:14:54.457052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.335 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.457278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.457313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.457564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.457603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.457740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.457773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 
00:27:55.336 [2024-12-10 04:14:54.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.458214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.458249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.458430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.458464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.458655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.458689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.458885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.458918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.459112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.459145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.459293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.459328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.459462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.459496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.459748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.459782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.460049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.460082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 
00:27:55.336 [2024-12-10 04:14:54.460272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.460308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.460517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.460550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.460755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.460789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.460982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.461236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.461272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.461453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.461486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.461617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.461650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.461911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.461943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.462137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.462180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.462370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.462404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 
00:27:55.336 [2024-12-10 04:14:54.462602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.462636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.462764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.462798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.462979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.463012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.463144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.463186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.463330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.463364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.463605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.463644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.463962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.463997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.464302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.464337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.464609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.464643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.464794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.464828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 
00:27:55.336 [2024-12-10 04:14:54.465049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.465083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.465315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.465350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.465462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.465495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.465720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.466052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.466086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.466291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.336 [2024-12-10 04:14:54.466327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.336 qpair failed and we were unable to recover it. 00:27:55.336 [2024-12-10 04:14:54.466544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.466579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.466694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.466727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.466944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.466978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.467177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.467212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 
00:27:55.337 [2024-12-10 04:14:54.467351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.467384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.467585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.467618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.467750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.467784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.467923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.467956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.468157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.468201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.468474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.468509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.468706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.468739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.469015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.469049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.469305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.469341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.469544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.469578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 
00:27:55.337 [2024-12-10 04:14:54.469722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.469757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.469957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.469991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.470198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.470233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.470507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.470542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.470736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.470769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.470980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.471014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.471256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.471291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.471447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.471480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.471684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.471719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.471916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.471950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 
00:27:55.337 [2024-12-10 04:14:54.472205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.472240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.472446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.472481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.472748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.472783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.473056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.473090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.473306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.473342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.473545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.473579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.473788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.473823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.474051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.474086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.474292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.474328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.474539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.474573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 
00:27:55.337 [2024-12-10 04:14:54.474695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.474731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.474999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.475033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.475190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.475227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.475354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.475387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.475590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.337 [2024-12-10 04:14:54.475625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.337 qpair failed and we were unable to recover it. 00:27:55.337 [2024-12-10 04:14:54.475861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.475896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.476080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.476114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.476353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.476390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.476594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.476628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.476760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.476795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 
00:27:55.338 [2024-12-10 04:14:54.476946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.476981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.477217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.477255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.477471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.477505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.477696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.477731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.477998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.478033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.478327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.478363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.478518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.478554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.478683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.478718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.478955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.478989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.479286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 
00:27:55.338 [2024-12-10 04:14:54.479426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.479461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.479586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.479622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.479923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.479957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.480253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.480296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.480483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.480517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.480647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.480681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.480816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.480849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.481098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.481133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.481313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.481349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 00:27:55.338 [2024-12-10 04:14:54.481490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.338 [2024-12-10 04:14:54.481525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.338 qpair failed and we were unable to recover it. 
00:27:55.338 [2024-12-10 04:14:54.481672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.338 [2024-12-10 04:14:54.481707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.338 qpair failed and we were unable to recover it.
[log condensed: the three-message sequence above repeats back-to-back from 04:14:54.481672 through 04:14:54.533091 (elapsed 00:27:55.338 to 00:27:55.344); only the timestamps advance, while errno = 111, tqpair=0x120e1a0, addr=10.0.0.2, and port=4420 are identical in every repetition]
00:27:55.344 [2024-12-10 04:14:54.533057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.344 [2024-12-10 04:14:54.533091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.344 qpair failed and we were unable to recover it.
00:27:55.345 [2024-12-10 04:14:54.533225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.533266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.533456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.533490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.533628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.533662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.533801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.533835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.534056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.534091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.534232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.534268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.534381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.534414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.534615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.534650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.534901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.534935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.535063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.535097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 
00:27:55.345 [2024-12-10 04:14:54.535235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.535270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.535405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.535439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.535706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.535790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.535938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.535976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.536133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.536187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.536380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.536414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.536527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.536559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.536681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.536714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.536892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.536926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.537058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.537091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 
00:27:55.345 [2024-12-10 04:14:54.537239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.537275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.537449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.537596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.537630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.537756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.537791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.537984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.538020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.538159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.538201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.538384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.538418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.538636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.538676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.538798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.345 [2024-12-10 04:14:54.538832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.345 qpair failed and we were unable to recover it. 00:27:55.345 [2024-12-10 04:14:54.538974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.539006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 
00:27:55.346 [2024-12-10 04:14:54.539198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.539233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.539352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.539385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.539574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.539608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.539724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.539758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.540876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.540910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 
00:27:55.346 [2024-12-10 04:14:54.541047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.541081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.541224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.541260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.541450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.541484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.541662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.541695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.541812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.541846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.541977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.542209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.542387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.542563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.542718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 
00:27:55.346 [2024-12-10 04:14:54.542930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.542964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.543093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.543126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.543327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.543362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.543545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.543580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.543778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.543819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.543942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.543975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.544078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.544112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.544239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.544273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.544464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.544499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.544622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.544655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 
00:27:55.346 [2024-12-10 04:14:54.544838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.544871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.544982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.545021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.545204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.545239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.545509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.545543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.545662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.545695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.545883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.545915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.546112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.546145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.546369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.546404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.346 [2024-12-10 04:14:54.546557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.346 [2024-12-10 04:14:54.546591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.346 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.546708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.546742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 
00:27:55.347 [2024-12-10 04:14:54.547032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.547066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.547251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.547287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.547557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.547590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.547772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.547806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.547956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.547990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.548176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.548212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.548373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.548407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.548598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.548632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.548912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.548946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.549218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.549254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 
00:27:55.347 [2024-12-10 04:14:54.549402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.549434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.549683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.549723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.549993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.550028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.550259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.550294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.550472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.550509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.550710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.550745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.550881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.550914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.551124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.551156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.551280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.551315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.551566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.551599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 
00:27:55.347 [2024-12-10 04:14:54.551736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.551769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.552035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.552069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.552266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.552301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.552448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.552482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.552617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.552651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.552974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.553009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.553139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.553183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.553367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.553400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.553613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.553647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.553784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.553818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 
00:27:55.347 [2024-12-10 04:14:54.554080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.554114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.554394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.554429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.554639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.554673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.554957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.554991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.555270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.555307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.555504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.555537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.555773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.347 [2024-12-10 04:14:54.555807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.347 qpair failed and we were unable to recover it. 00:27:55.347 [2024-12-10 04:14:54.555986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.556020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.556268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.556317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.556531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.556565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 
00:27:55.348 [2024-12-10 04:14:54.556715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.556748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.556872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.556906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.557162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.557203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.557384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.557418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.557560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.557594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.557867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.557901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.558204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.558239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.558518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.558552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.558755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.558790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.558974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.559007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 
00:27:55.348 [2024-12-10 04:14:54.559228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.559264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.559412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.559444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.559702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.559780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.560025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.560063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.560291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.560330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.560526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.560560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.560863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.560897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.561158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.561202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.561360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.561394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 00:27:55.348 [2024-12-10 04:14:54.561599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.348 [2024-12-10 04:14:54.561632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:55.348 qpair failed and we were unable to recover it. 
00:27:55.348 [2024-12-10 04:14:54.561890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.348 [2024-12-10 04:14:54.561924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.348 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failed triple repeats for tqpair=0x7f23e0000b90 through 04:14:54.569362 ...]
00:27:55.349 [2024-12-10 04:14:54.569570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.349 [2024-12-10 04:14:54.569651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.349 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f23d8000b90 through 04:14:54.587786 ...]
00:27:55.628 [2024-12-10 04:14:54.588061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-12-10 04:14:54.588141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f23e0000b90 through 04:14:54.598989 ...]
00:27:55.629 [2024-12-10 04:14:54.599324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-12-10 04:14:54.599391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f23d8000b90 through 04:14:54.614212, ending with ...]
00:27:55.631 [2024-12-10 04:14:54.614437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-12-10 04:14:54.614475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 [2024-12-10 04:14:54.614671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.614705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.614830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.614865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.615124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.615161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.615343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.615379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.615499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.615536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.615739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.615774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.616033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.616068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.616294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.616332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.616591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.616946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.616983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 
00:27:55.631 [2024-12-10 04:14:54.617191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.617230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.617440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.617477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.617730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.617764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.617961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.617998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.618193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.618231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.618547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.618583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.618820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.618854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.619099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.619132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.619343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.619379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.619596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.619632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 
00:27:55.631 [2024-12-10 04:14:54.619858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.619893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.620079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.620112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.620331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.620375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.620603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.620639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.620780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.620814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.621032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.621068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.621256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.621294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.621572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.631 [2024-12-10 04:14:54.621604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.631 qpair failed and we were unable to recover it. 00:27:55.631 [2024-12-10 04:14:54.621890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.621923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.622206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.622241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 
00:27:55.632 [2024-12-10 04:14:54.622498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.622641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.622672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.622805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.622962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.622992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.623190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.623223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.623499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.623531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.623756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.623787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.623995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.624026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.624218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.624251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.624529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.624564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 
00:27:55.632 [2024-12-10 04:14:54.624840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.624873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.625133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.625180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.625470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.625504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.625717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.625749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.626027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.626060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.626258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.626293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.626492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.626524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.626721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.626753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.626941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.626975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.627274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.627310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 
00:27:55.632 [2024-12-10 04:14:54.627509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.627543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.627732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.627770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.627975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.628011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.628270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.628307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.628443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.628477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.628765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.628800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.628915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.628950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.629146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.629196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.629404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.629439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.629682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.629718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 
00:27:55.632 [2024-12-10 04:14:54.629927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.629962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.630099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.630132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.630282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.630326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.630538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.630575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.630853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.630886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.631020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.631057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.631246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.632 [2024-12-10 04:14:54.631283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.632 qpair failed and we were unable to recover it. 00:27:55.632 [2024-12-10 04:14:54.631433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.631468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.631769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.631803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.632043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.632079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 
00:27:55.633 [2024-12-10 04:14:54.632290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.632327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.632549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.632583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.632785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.632821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.632976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.633011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.633219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.633256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.633443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.633479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.633669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.633704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.633925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.634108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.634142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.634463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.634500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 
00:27:55.633 [2024-12-10 04:14:54.634781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.634816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.634958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.634992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.635941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.635975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.636186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.636222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.636424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.636458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 
00:27:55.633 [2024-12-10 04:14:54.636663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.636697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.636888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.636921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.637154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.637203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.637457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.637490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.637709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.637743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.637860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.637895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.638150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.638197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.638393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.638430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.638617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.638652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.638854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.638892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 
00:27:55.633 [2024-12-10 04:14:54.639102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.639138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.639359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.639394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.639677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.639719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.639918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.639952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.640258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.640297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.640574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.640610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.633 qpair failed and we were unable to recover it. 00:27:55.633 [2024-12-10 04:14:54.640771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.633 [2024-12-10 04:14:54.640807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.640946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.640983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.641175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.641211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.641407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.641441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 
00:27:55.634 [2024-12-10 04:14:54.641577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.641613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.641816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.641852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.642123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.642158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.642422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.642458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.642664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.642698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.642908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.642941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.643156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.643200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.643403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.643437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.643629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.643662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.643859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.643893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 
00:27:55.634 [2024-12-10 04:14:54.644094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.644128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.644300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.644504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.644540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.644830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.644864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.645136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.645182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.645310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.645346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.645546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.645579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.645798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.645832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.645956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.645992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.646309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.646345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 
00:27:55.634 [2024-12-10 04:14:54.646627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.646662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.646945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.646979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.647230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.647266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.647468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.647503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.647818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.647854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.647986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.648022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.648156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.648200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.648480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.648514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.648714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.648748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.649046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.649080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 
00:27:55.634 [2024-12-10 04:14:54.649341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.649377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.649600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.649634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.649845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.649886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.650145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.650189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.650384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.634 [2024-12-10 04:14:54.650418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.634 qpair failed and we were unable to recover it. 00:27:55.634 [2024-12-10 04:14:54.650622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.635 [2024-12-10 04:14:54.650656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.635 qpair failed and we were unable to recover it. 00:27:55.635 [2024-12-10 04:14:54.650919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.635 [2024-12-10 04:14:54.650952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.635 qpair failed and we were unable to recover it. 00:27:55.635 [2024-12-10 04:14:54.651135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.635 [2024-12-10 04:14:54.651181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.635 qpair failed and we were unable to recover it. 00:27:55.635 [2024-12-10 04:14:54.651453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.635 [2024-12-10 04:14:54.651489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.635 qpair failed and we were unable to recover it. 00:27:55.635 [2024-12-10 04:14:54.651747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.635 [2024-12-10 04:14:54.651780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.635 qpair failed and we were unable to recover it. 
00:27:55.635 [2024-12-10 04:14:54.651979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.652015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.652199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.652235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.652371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.652406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.652631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.652664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.652850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.652885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.653145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.653188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.653426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.653461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.653692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.653725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.654003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.654040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.654322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.654360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.654635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.654669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.654817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.654853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.655051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.655085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.655349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.655387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.655652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.655688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.655955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.655989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.656191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.656227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.656426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.656459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.656737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.656770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.657007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.657041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.657186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.657224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.657519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.657553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.657766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.657800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.658079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.658114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.658431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.658466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.658731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.658765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.658966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.659001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.659222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.659258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.659396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.659432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.659686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.659721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.659913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.659948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.660222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.660257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.660372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.660413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.660604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-12-10 04:14:54.660638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-12-10 04:14:54.660934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.660968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.661178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.661213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.661468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.661503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.661804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.661839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.662043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.662078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.662295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.662331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.662510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.662543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.662771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.662806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.662945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.662979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.663254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.663290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.663505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.663540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.663834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.663868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.664077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.664115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.664390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.664424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.664587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.664622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.664899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.664933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.665119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.665154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.665442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.665478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.665703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.665739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.666039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.666074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.666282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.666514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.666549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.666678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.666712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.666928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.666962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.667188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.667224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.667426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.667460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.667661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.667695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.667897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.667930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.668118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.668154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.668429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.668464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.668745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.668782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.668895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.668930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.669112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.669148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.669378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-12-10 04:14:54.669415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-12-10 04:14:54.669541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.669576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.669853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.669888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.670146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.670191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.670437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.670471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.670678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.670718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.670907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.670943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.671143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.671201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.671421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.671456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.671688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.671724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.671919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.671955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.672072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.672106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.672318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.672353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.672652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.672685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.672959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.672993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.673208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.673245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.673522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.673556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.673759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.673793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.673917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.673952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.674236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.674271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.674485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.674518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.674654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.674690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.674998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.675228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.675263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.675562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.675596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.675799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.675832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.675977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.676012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.676288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.676323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.676579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.676614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.676920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.676953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.677069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.677103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.677313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.677348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.677664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.677699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.677902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.677937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.678220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.678257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.678535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.678570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.678775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.678809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.679087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.679123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.679420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.679456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.679719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-12-10 04:14:54.679752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-12-10 04:14:54.680051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.680084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.680317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.680352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.680537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.680571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.680828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.680862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.681001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.681035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.681186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.681228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.681506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.681539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.681815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.681849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.682039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.682073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.682258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.682295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.682498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.682531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.682717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.682752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.682991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.683025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.683155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.683198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.683384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.683418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.683689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.683722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.683855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.683890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.684070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.684104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.684382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.684419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.684635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.684670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.684883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.684917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.685186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.685221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.685501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.685535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.685762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.685796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.685989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.686023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.686226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.686264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.686461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.686495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.686751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.686784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.686991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.687026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.687217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.687252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.687525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.687559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.687883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.687918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.688198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.688280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.688543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.688583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.688811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.688848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.689122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.689162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.689461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.689499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.689646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.689680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.689867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.689902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-12-10 04:14:54.690122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-12-10 04:14:54.690159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.690427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.690466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.690724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.690761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.690968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.691006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.691201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.691245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.691410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.691456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.691664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.691700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.691926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.691964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.692148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.692191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.692453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.692498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.692646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.692680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.692878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.692921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.693110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.693148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.693286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.693322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.693514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.693551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.693809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.693845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.694128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.694165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.694701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.694737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.694995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.695033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.695222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.695258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.695577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.695618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.695890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.695925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.696059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.696095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.696351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-12-10 04:14:54.696389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-12-10 04:14:54.696613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.696646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.696900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.696935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.697144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.697189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.697397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.697432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.697642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.697676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.697797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.697833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.698020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.698055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.698366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.698405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.698690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.698723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.698912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.698949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 
00:27:55.639 [2024-12-10 04:14:54.699265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.699301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.699523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.699559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.699685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.699719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.699999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.700032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.700250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.700287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.639 [2024-12-10 04:14:54.700511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.639 [2024-12-10 04:14:54.700545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.639 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.700731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.700766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.701024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.701060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.701258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.701295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.701577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.701610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 
00:27:55.640 [2024-12-10 04:14:54.701831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.701867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.702120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.702155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.702453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.702488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.702755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.702789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.703089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.703124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.703375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.703410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.703561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.703595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.703848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.703883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.704088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.704122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.704411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.704448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 
00:27:55.640 [2024-12-10 04:14:54.704633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.704667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.704936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.704969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.705156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.705204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.705464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.705498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.705711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.705746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.706027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.706061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.706335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.706378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.706579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.706613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.706769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.706804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.707080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.707114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 
00:27:55.640 [2024-12-10 04:14:54.707270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.707307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.707572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.707607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.707735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.707770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.708026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.708061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.708263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.708300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.708500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.708534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.708725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.708760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.708892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.708926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.709128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.709163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 00:27:55.640 [2024-12-10 04:14:54.709379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.640 [2024-12-10 04:14:54.709413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.640 qpair failed and we were unable to recover it. 
00:27:55.640 [2024-12-10 04:14:54.709637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.709672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.709808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.709843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.710145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.710200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.710403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.710436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.710718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.710754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.711005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.711039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.711290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.711328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.711450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.711484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.711687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.711721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.711959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 
00:27:55.641 [2024-12-10 04:14:54.712218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.712254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.712463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.712498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.712760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.712794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.713054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.713089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.713279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.713314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.713592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.713627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.713822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.713856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.714043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.714077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.714203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.714239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.714535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.714570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 
00:27:55.641 [2024-12-10 04:14:54.714687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.714721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.714942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.714978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.715257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.715295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.715437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.715472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.715750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.715786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.715911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.715945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.716198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.716239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.716443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.716477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.716682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.716719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.717020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.717055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 
00:27:55.641 [2024-12-10 04:14:54.717238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.717274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.717555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.717590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.717793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.717827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.718020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.718054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.718323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.718360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.718543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.718577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.718763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.718797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.718947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.718980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.719263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.719298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.641 qpair failed and we were unable to recover it. 00:27:55.641 [2024-12-10 04:14:54.719431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.641 [2024-12-10 04:14:54.719465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 
00:27:55.642 [2024-12-10 04:14:54.719609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.719643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.719923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.719960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.720153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.720200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.720506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.720542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.720745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.720779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.720999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.721034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.721239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.721275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.721462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.721498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.721680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.721714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.721882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 
00:27:55.642 [2024-12-10 04:14:54.722065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.722100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.722374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.722409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.722609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.722644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.722840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.722877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.723071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.723104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.723298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.723334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.723480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.723514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.723718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.723873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.723909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.724112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.724147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 
00:27:55.642 [2024-12-10 04:14:54.724342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.724379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.724652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.724686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.724962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.724998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.725239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.725275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.725460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.725496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.725628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.725664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.725865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.725906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.726109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.726143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.726306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.726339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.726546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.726581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 
00:27:55.642 [2024-12-10 04:14:54.726785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.726819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.727008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.727044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.727183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.727218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.727496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.727532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.727642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.727676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.727790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.727824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.728031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.728064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.728247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.728282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.642 qpair failed and we were unable to recover it. 00:27:55.642 [2024-12-10 04:14:54.728391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.642 [2024-12-10 04:14:54.728425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.728616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.728652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-12-10 04:14:54.728847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.728882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.729135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.729180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.729293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.729327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.729515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.729549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.729832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.729866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.730153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.730198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.730413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.730447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.730699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.730735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.731037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.731239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.731278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-12-10 04:14:54.731539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.731573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.731764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.731797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.732049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.732085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.732395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.732432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.732648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.732681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.732948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.732982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.733097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.733131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.733404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.733440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.733654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.733689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-12-10 04:14:54.733978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-12-10 04:14:54.734014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-12-10 04:14:54.734201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.734351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.734384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.734527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.734562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.734753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.734786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.734997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.735032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.735256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.735292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.735491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.735536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.735719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.735754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.735873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.735907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.736040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.736075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.736261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.736297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.736463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.736496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.736774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.736808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.737003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.737036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.737294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.737328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.737528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.737561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.737834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.737867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-12-10 04:14:54.738055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-12-10 04:14:54.738088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.738348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.738384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.738570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.738606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.738727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.738760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.739040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.739074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.739329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.739366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.739650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.739683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.739807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.739841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.740121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.740155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.740417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.740454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.740727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.740762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.740906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.740942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.741164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.741225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.741343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.741378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.741654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.741687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.741870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.741907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.742113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.742150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.742364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.742400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.742676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.742712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.742994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.743029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.743225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.743261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.743470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.743506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.743756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.743792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.744097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.744131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.744269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.744305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.744580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.744616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.744727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.744762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.744886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.744919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.745146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.745202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.745333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.745373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.745627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.745661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.745941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.745976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.746180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.746215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.746441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.746477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.746662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.644 [2024-12-10 04:14:54.746698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.644 qpair failed and we were unable to recover it.
00:27:55.644 [2024-12-10 04:14:54.746905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.746942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.747080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.747113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.747258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.747295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.747570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.747605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.747740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.747775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.748030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.748066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.748342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.748381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.748616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.748649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.748851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.749081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.749116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.749398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.749433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.749736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.749771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.750009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.750044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.750260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.750297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.750552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.750587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.750776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.750811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.751009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.751043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.751187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.751221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.751406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.751441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.751640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.751676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.751954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.751989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.752150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.752197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.752393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.752427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.752609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.752644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.752848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.752881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.753158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.753217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.753521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.753557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.753809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.753843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.754125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.754159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.754441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.754476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.754673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.754708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.754896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.754930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.755190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.755228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.755432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.755466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.755749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.755791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.756068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.756103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.756400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.756437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.756636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.756671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.645 qpair failed and we were unable to recover it.
00:27:55.645 [2024-12-10 04:14:54.756876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.645 [2024-12-10 04:14:54.756913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.757052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.757087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.757293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.757329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.757593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.757627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.757813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.757850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.758129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.758163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.758363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.758399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.758679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.758714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.758993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.759027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.759224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.759262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.759549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.759584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.759721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.759756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.759891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.759927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.760145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.760188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.760386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.760420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.760719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.760753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.761019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.761053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.761280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.761317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.761504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.761539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.761671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.761705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.761931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.761964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.762228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.762263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.762486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.762521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.762802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.762837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.763088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.763122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.763427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.763463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.763738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.763772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.763949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.763984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.764095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.764129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.764344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.764380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.764637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.764671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.764946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.764980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.765203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.765239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.765518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.765553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.765744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.765779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.765961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.765994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.766194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.766236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.766499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.766533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.766715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.766749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.646 [2024-12-10 04:14:54.767024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.646 [2024-12-10 04:14:54.767058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.646 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.767251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.767285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.767589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.767623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.767880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.767914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.768114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.768148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.768359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.768394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.768671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.768705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.768886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.768920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.769188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.769223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.769443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.769477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.769609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.769642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.769949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.769984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.770197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.770232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.770438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.770472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.770657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.770692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.770974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.771008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.771221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.771256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.771383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.771417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.771671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.771705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.771893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.771926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.772184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.772220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.772484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.772518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.772818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.772852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.773118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.773153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.773358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.773393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.773590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.773623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.773896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.773931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.774181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.774216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.774421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.774455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.774648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.774682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.774863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.774897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.775182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.775218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.775478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.775512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.775800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.775835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.776106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.776140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.776416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.776452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.776665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.776699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.776969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.777010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.777231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.777267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.647 [2024-12-10 04:14:54.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.647 [2024-12-10 04:14:54.777606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.647 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.777807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.777841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.778036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.778071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.778346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.778382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.778663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.778697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.778971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.779005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.779209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.779245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.779489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.779524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.779824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.779858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.780143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.780187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.780456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.780491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.780698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.780732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.781008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.781043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.781325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.781360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.781640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.781674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.781896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.781931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.782056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.782090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.782359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.782395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.782595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.782628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.782769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.782804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.783081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.783115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.783342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.783378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.783633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.783667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.783968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.784002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.784190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.784226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.784437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.784471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.784719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.784752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.785052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.785086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.785354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.785390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.785613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.785647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.785918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.785952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.786152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.786198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.786447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.786482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.786755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.786789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.786988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.787023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.787267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.787301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.787492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.787527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.787777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.787812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.787997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.788041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.788318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.648 [2024-12-10 04:14:54.788353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.648 qpair failed and we were unable to recover it.
00:27:55.648 [2024-12-10 04:14:54.788632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.649 [2024-12-10 04:14:54.788665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.649 qpair failed and we were unable to recover it.
00:27:55.649 [2024-12-10 04:14:54.788940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.649 [2024-12-10 04:14:54.788974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.649 qpair failed and we were unable to recover it.
00:27:55.649 [2024-12-10 04:14:54.789259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.789296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.789525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.789559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.789742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.789776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.790063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.790096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.790281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.790317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.790630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.790664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.790940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.790974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.791227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.791262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.791447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.791481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.791764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.791798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 
00:27:55.649 [2024-12-10 04:14:54.792007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.792040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.792226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.792262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.792514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.792548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.792846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.792879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.793080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.793114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.793402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.793437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.793695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.793729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.793949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.793983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.794237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.794272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.794401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.794435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 
00:27:55.649 [2024-12-10 04:14:54.794689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.794722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.795022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.795058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.795320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.795355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.795563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.795597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.795853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.795888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.796143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.796185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.796486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.796520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.796778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.796812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.797115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.797148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.797441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.797476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 
00:27:55.649 [2024-12-10 04:14:54.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.797690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.797983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.798016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.649 qpair failed and we were unable to recover it. 00:27:55.649 [2024-12-10 04:14:54.798237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.649 [2024-12-10 04:14:54.798273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.798572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.798606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.798869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.798903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.799035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.799068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.799269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.799310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.799504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.799537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.799842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.799876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.800138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.800180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 
00:27:55.650 [2024-12-10 04:14:54.800438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.800471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.800680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.800896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.800929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.801112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.801145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.801360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.801395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.801591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.801624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.801809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.801842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.802029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.802061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.802333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.802369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.802638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.802672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 
00:27:55.650 [2024-12-10 04:14:54.802867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.802900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.803105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.803139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.803426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.803740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.803774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.804052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.804086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.804358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.804394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.804535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.804568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.804821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.804855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.805070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.805105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.805394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.805429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 
00:27:55.650 [2024-12-10 04:14:54.805613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.805647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.805825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.805859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.806111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.806146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.806444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.806481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.806750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.806784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.807068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.807103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.807379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.807415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.807676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.807710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.807894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.807929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.808112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.808145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 
00:27:55.650 [2024-12-10 04:14:54.808363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.808399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.650 [2024-12-10 04:14:54.808652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.650 [2024-12-10 04:14:54.808687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.650 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.808917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.808951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.809077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.809111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.809341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.809376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.809653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.809687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.809871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.809911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.810183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.810219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.810479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.810513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.810735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.810769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-12-10 04:14:54.810965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.811000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.811197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.811232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.811526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.811560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.811843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.811877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.812090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.812124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.812336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.812371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.812497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.812531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.812719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.812753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.813000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.813034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.813289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.813324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-12-10 04:14:54.813584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.813619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.813833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.813866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.814142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.814185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.814411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.814446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.814640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.814673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.814947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.814981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.815190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.815225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.815426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.815461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.815645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.815679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.815966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-12-10 04:14:54.816220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.816256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.816473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.816507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.816654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.816689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.816962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.816997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.817201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.817237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.817445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.817478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.817779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.817814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.818060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.818094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.818382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.818417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-12-10 04:14:54.818564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.818598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-12-10 04:14:54.818822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-12-10 04:14:54.818856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.819135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.819179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.819395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.819429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.819708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.819742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.819892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.819925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.820187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.820224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.820447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.820487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.820693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.820727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.821006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.821040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.821292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.821327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-12-10 04:14:54.821581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.821615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.821813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.821847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.822117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.822152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.822383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.822418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.822670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.822704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.822930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.822963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.823220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.823256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.823380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.823413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.823611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.823645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-12-10 04:14:54.823781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-12-10 04:14:54.823814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-12-10 04:14:54.824096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.652 [2024-12-10 04:14:54.824129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:55.652 qpair failed and we were unable to recover it.
00:27:55.657 [... the three-line error triplet above repeats back-to-back roughly 210 times in this span, from 04:14:54.824096 through 04:14:54.880990: first for tqpair=0x7f23d8000b90 (through 04:14:54.827402), then for tqpair=0x120e1a0 (from 04:14:54.827790 onward). Every attempt targets addr=10.0.0.2, port=4420 and fails identically with connect() errno = 111 followed by "qpair failed and we were unable to recover it." ...]
00:27:55.657 [2024-12-10 04:14:54.881211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.657 [2024-12-10 04:14:54.881252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.657 qpair failed and we were unable to recover it. 00:27:55.657 [2024-12-10 04:14:54.881456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.657 [2024-12-10 04:14:54.881491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.657 qpair failed and we were unable to recover it. 00:27:55.657 [2024-12-10 04:14:54.881679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.657 [2024-12-10 04:14:54.881714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.657 qpair failed and we were unable to recover it. 00:27:55.657 [2024-12-10 04:14:54.881999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.882040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.882323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.882360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.882576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.882611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.882796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.882833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.883029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.883067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.883255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.883291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.883416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.883454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 
00:27:55.658 [2024-12-10 04:14:54.883640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.883674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.883823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.883861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.884071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.884111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.884336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.884372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.884656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.884697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.884899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.884945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.885206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.885243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.885436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.885471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.885687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.885721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.885978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.886012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 
00:27:55.658 [2024-12-10 04:14:54.886208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.886244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.886548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.886585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.886773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.886807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.887101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.887139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.887419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.887456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.887646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.887681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.887861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.887896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.888194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.888233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.888442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.888476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.888721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.888762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 
00:27:55.658 [2024-12-10 04:14:54.888885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.888922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.889062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.889095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.889405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.889442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.889596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.889632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.889826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.889865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.889997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.890033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.890154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.890232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.658 [2024-12-10 04:14:54.890444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.658 [2024-12-10 04:14:54.890478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.658 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.890634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.890669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.890922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.890958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 
00:27:55.659 [2024-12-10 04:14:54.891155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.891207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.891396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.891430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.891639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.891673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.891884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.891920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.659 [2024-12-10 04:14:54.892042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.659 [2024-12-10 04:14:54.892079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.659 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.892291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.892327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.892458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.892495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.892624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.892665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.892884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.892920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.893198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.893236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-12-10 04:14:54.893457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.893491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.893680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.893714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.893904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.893941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.894141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.894202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.894392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.894427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.894611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.894645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.894899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.894932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.895192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.895230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.895494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.895529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.895666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.895701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-12-10 04:14:54.895920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.895954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.896244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.896279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.896558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.896593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.896846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.896882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-12-10 04:14:54.897075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-12-10 04:14:54.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.897332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.897368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.897571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.897606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.897893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.897927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.898201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.898239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.898357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.898392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.898669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.898705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.898905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.898941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.899129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.899164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.899430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.899466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.899653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.899693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.899903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.899939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.900151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.900197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.900456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.900491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.900676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.900712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.900846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.900880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.901191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.901227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.901480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.901515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.901718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.901752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.902022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.902056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.902312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.902348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.902641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.902677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.902890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.902925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.903138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.903197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.903467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.903501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.903633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.903667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.903922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.903956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.904227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.904262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.904521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.904555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.904854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.904888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.905182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.905218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.905421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.905456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.905660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.905695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.905947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.905981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.906208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.906244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.906511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.906545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.906739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.906773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.906967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.907002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.907265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.907301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.907577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.907611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.907792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.907826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.908024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.908058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.908261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.908297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.908599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.908634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.908836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.909052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.909087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.909269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.909304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.909524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.909558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.909813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.909848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.910125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.910160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.910379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.910413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.910670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.910705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.910963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.910997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.911191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.911228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.911450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.911485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.911739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.911773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-12-10 04:14:54.911980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.912015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.912305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.912341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.912618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.912652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-12-10 04:14:54.912927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-12-10 04:14:54.912961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.913225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.913261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.913488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.913522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.913730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.913765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.914067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.914101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.914373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.914408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.914669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.914704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 
00:27:55.942 [2024-12-10 04:14:54.915000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.915033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.915302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.915339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.915625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.915659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.915954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.915988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.916125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.916159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.916309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.916344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.916526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.916560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.916756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.916790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.916972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.917006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-12-10 04:14:54.917263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-12-10 04:14:54.917298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-12-10 04:14:54.969264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.969300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.969519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.969553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.969832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.969866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.970050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.970085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.970337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.970373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.970648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.970683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.970938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.970972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.971207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.971243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.971503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.971538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.971674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.971708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-12-10 04:14:54.971848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.971882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.972032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.972067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.972345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.972380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.972585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.972619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.972819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.972854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.973129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.973163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.973451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.973486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.973766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.973800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.974083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.974117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.974401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.974437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-12-10 04:14:54.974644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.974678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.974955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.974990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.975186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.975222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.975436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.975470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.975748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.975782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.976061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.976096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.976329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.976365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.976593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.976627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-12-10 04:14:54.976827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-12-10 04:14:54.976861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.976987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.977027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-12-10 04:14:54.977294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.977329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.977634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.977670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.977928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.977961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.978102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.978136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.978295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.978330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.978611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.978646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.978922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.979236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.979272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.979556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.979589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.979866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.979899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-12-10 04:14:54.980191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.980228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.980497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.980532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.980717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.980752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.981025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.981060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.981181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.981217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.981500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.981534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.981832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.982108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.982142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.982366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.982401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.982655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.982689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-12-10 04:14:54.982891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.982925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.983140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.983183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.983472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.983507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.983709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.983742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.984035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.984069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.984360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.984396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.984519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.984559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.984858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.984891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.985164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.985209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.985414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.985449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-12-10 04:14:54.985725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.985759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.986064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.986098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.986355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.986390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.986644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.986678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.986936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-12-10 04:14:54.986970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-12-10 04:14:54.987159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.987204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.987422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.987456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.987645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.987679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.987995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.988030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.988291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.988329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-12-10 04:14:54.988473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.988507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.988722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.988757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.988938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.988972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.989103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.989137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.989427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.989462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.989666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.989699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.989878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.989913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.990187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.990222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.990503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.990536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.990816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.990850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-12-10 04:14:54.990997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.991031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.991293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.991328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.991580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.991614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.991825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.991865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.992150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.992196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.992408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.992443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.992695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.992729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.993028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.993062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.993208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.993244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.993430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.993464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-12-10 04:14:54.993737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.993772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.994058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.994093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.994369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.994404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.994690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.994724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.994998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.995032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.995259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.995295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.995577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.995611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.995770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.995805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.995999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.996033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.996288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.996322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-12-10 04:14:54.996457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.996491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.996706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.996740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.997014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.997048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.997307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.997344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.997619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.997653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.997935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.997969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.998253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.998289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.998566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.998600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.998878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.998913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.999095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.999129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-12-10 04:14:54.999320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.999354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.999642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.999677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:54.999881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:54.999915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.000188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:55.000224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.000422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:55.000456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.000655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:55.000690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.000966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:55.000999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.001284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-12-10 04:14:55.001320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-12-10 04:14:55.001539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.001573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.001824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.001858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 
00:27:55.952 [2024-12-10 04:14:55.002157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.002201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.002485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.002519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.002772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.002805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.003073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.003107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.003382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.003425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.003710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.003744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.003932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.003966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.004204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.004240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.004442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.004476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.004759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.004793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 
00:27:55.952 [2024-12-10 04:14:55.005092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.005126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.005417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.005453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.005724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.005758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.005945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.005980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.006253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.006553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.006587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.006882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.006916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.007190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.007226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.007514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.007549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 00:27:55.952 [2024-12-10 04:14:55.007755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-12-10 04:14:55.007788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 
00:27:55.952 [2024-12-10 04:14:55.007991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.952 [2024-12-10 04:14:55.008024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.952 qpair failed and we were unable to recover it.
[... the same three-record failure sequence repeats for every reconnect attempt between 04:14:55.008 and 04:14:55.067: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x120e1a0 at 10.0.0.2 port 4420, and the qpair fails and cannot be recovered ...]
00:27:55.956 [2024-12-10 04:14:55.067050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.956 [2024-12-10 04:14:55.067083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:55.956 qpair failed and we were unable to recover it.
00:27:55.956 [2024-12-10 04:14:55.067296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.067331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.067534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.067568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.067775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.067809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.068058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.068092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.068283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.068318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.068580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.068614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.068895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.068928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.069163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.069431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.069465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.069719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.069752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-12-10 04:14:55.069947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.069981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.070094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.070129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.070339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.070374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.070577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.070611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.070886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.070920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.071119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.071153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.071420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.071455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.071739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.071774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.072070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.072109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.072403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.072438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-12-10 04:14:55.072701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.072735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.072924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.072958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.073186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.073222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.073472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.073506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.073741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.073775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.073960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.073994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.074187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.074223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.074494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.074528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.074675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.074708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.074909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.074944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-12-10 04:14:55.075132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.075182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.075457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.075491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.075643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.075678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.075932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.075966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.076223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.076259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.076474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.076507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.076785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.076819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.077086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.077120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.077323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.077358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.077555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.077589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-12-10 04:14:55.077862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.077895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.078035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.078069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.078267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.078303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.078493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.078526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.078712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.078747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-12-10 04:14:55.078963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-12-10 04:14:55.079003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.079279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.079315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.079518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.079553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.079830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.079865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.080054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.080088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-12-10 04:14:55.080226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.080261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.080513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.080547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.080733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.080767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.081049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.081083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.081352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.081388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.081610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.081643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.081945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.081980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.082185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.082221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.082431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.082465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.082607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.082641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-12-10 04:14:55.082845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.082880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.083194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.083231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.083554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.083588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.083818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.083850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.084082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.084118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.084350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.084387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.084576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.084610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.084851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.084886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.085068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.085102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.085379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.085415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-12-10 04:14:55.085645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.085681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.085931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.085965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.086157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-12-10 04:14:55.086205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-12-10 04:14:55.086459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.086493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.086625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.086660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.086846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.086880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.087076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.087110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.087405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.087443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.087584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.087618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.087918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.087958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 
00:27:55.959 [2024-12-10 04:14:55.088150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.088200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.088408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.088444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.088698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.088732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.088935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.088969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.089179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.089216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.089518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.089558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.089838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.089872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.090096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.090129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.090283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.090321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.090506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.090539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 
00:27:55.959 [2024-12-10 04:14:55.090842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.090879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.091091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.091129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.091419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.091456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.091681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.091718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.091998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.092034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.092320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.092360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.092630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.092668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.092950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.092984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.093194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.093232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.093457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.093491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 
00:27:55.959 [2024-12-10 04:14:55.093716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.093752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.093976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.094009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.094203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.094240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.094374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.094409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.094688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.094723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.094917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.094951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.095151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.095197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.095470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.095505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.095707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.095741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.095940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.095973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 
00:27:55.959 [2024-12-10 04:14:55.096250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.096288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.096491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.096525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.096780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.096815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.097023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.097065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.097290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.097326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.097602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.097639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.097842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.097877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.097999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.098033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.098294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.098334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.098610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.098645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 
00:27:55.959 [2024-12-10 04:14:55.098973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.099010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.099263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.099299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.099602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.099636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.959 [2024-12-10 04:14:55.099864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.959 [2024-12-10 04:14:55.099901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.959 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.100207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.100245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.100540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.100575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.100852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.100888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.101116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.101152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.101486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.101523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.101711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.101745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 
00:27:55.960 [2024-12-10 04:14:55.102023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.102058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.102268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.102303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.102512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.102548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.102806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.102844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.103036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.103071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.103348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.103384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.103670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.103705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.103939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.103973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.104200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.104238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 00:27:55.960 [2024-12-10 04:14:55.104530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.960 [2024-12-10 04:14:55.104567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.960 qpair failed and we were unable to recover it. 
00:27:55.964 [2024-12-10 04:14:55.158072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.158111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.158338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.158376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.158585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.158620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.158810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.158846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.159037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.159072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.159367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.159404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.159660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.159697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.159998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.160035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.160336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.160374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.160503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.160542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 
00:27:55.964 [2024-12-10 04:14:55.160814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.160850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.161127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.161162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.161314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.161352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-12-10 04:14:55.161550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-12-10 04:14:55.161585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.161798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.161834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.162026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.162066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.162370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.162407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.162627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.162674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.162984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.163017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.163252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.163289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-12-10 04:14:55.163422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.163458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.163621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.163667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.163924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.163961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.164084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.164120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.164356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.164392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.164581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.164618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.164854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.164891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.165075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.165109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.165345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.165381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.165597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.165632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-12-10 04:14:55.165858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.165891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.166048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.166084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.166351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.166388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.166604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.166640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.166847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.166881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.167101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.167136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.167275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.167312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.167429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.167463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.167686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.167723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.167838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.167873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-12-10 04:14:55.168061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.168096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.168428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.168464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.168648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.168681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.168875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.168909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.169113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.169148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.169431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.169468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.169658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.169692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.169875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.169909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.170111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.170147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.170340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.170375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-12-10 04:14:55.170581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-12-10 04:14:55.170616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-12-10 04:14:55.170835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.170871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.171021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.171057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.171290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.171325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.171455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.171496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.171781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.171817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.172005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.172039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.172308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.172347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.172470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.172507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.172642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.172679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-12-10 04:14:55.172881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.172916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.173119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.173159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.173460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.173496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.173780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.173814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.173964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.173999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.174253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.174291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.174581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.174821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.174857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.175137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.175183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.175472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.175505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-12-10 04:14:55.175625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.175662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.175814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.175854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.176113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.176322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.176360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.176654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.176691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.176822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.176859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.177141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.177191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.177500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.177537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.177745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.177780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.177897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.177934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-12-10 04:14:55.178219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.178260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.178467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.178625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.178660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.178865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.178906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.179193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.179234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.179519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.179563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.179769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.179805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.180109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.180336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.180374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.180611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-12-10 04:14:55.180911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.180948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.181243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.181283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.181493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.181530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.181829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.181869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.182118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.182353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.182389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.182598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.182633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.182864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.182901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.183213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.183252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.183464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.183501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-12-10 04:14:55.183777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.183815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.184090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.184124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.184359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.184397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.184583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.184620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.184829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.184864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.185133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.185183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.185464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.185501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-12-10 04:14:55.185617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-12-10 04:14:55.185652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.185849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.185885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.186011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.186045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 [2024-12-10 04:14:55.186312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.186356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.186638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.186675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.186965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.187016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.187239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.187278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.187506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.187540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.187749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.187784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.187926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.187962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.188085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.188121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.188367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.188413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.188624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.188659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 [2024-12-10 04:14:55.188779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.188815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.189031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.189070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.189236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.189276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.189463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.189499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.189791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.189831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.190049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.190085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.190242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.190278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.190558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.190595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.190894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-12-10 04:14:55.191221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.191262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 [2024-12-10 04:14:55.191422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-12-10 04:14:55.191458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it.
00:27:55.967 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x120e1a0 (addr=10.0.0.2, port=4420), with only the timestamps varying, from [2024-12-10 04:14:55.191736] through [2024-12-10 04:14:55.246126]; duplicate lines elided ...]
00:27:56.251 [2024-12-10 04:14:55.246187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.246222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it.
00:27:56.251 [2024-12-10 04:14:55.246425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.246460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.246648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.246682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.246959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.246993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.247302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.247338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.247526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.247560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.247784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.247818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.248121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.248155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.248363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.248398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.248613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.248647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.248868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 
00:27:56.251 [2024-12-10 04:14:55.249187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.249223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.249447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.249481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.249759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.249793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.249989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.250024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.250324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.250360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.250665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.250700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.250967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.251008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.251257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.251293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.251480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.251515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.251768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.251802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 
00:27:56.251 [2024-12-10 04:14:55.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.252112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.252445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.252481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.252697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.252732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.251 [2024-12-10 04:14:55.252916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.251 [2024-12-10 04:14:55.252951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.251 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.253225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.253261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.253546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.253580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.253856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.253890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.254029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.254064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.254280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.254316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.254572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.254607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 
00:27:56.252 [2024-12-10 04:14:55.254897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.254932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.255044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.255078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.255334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.255370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.255584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.255618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.255878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.255913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.256060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.256094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.256276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.256311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.256501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.256534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.256818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.256852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.257151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.257199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 
00:27:56.252 [2024-12-10 04:14:55.257334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.257368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.257650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.257684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.257957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.257992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.258216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.258464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.258500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.258778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.258812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.259096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.259131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.259407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.259443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.259630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.259665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.259944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.259978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 
00:27:56.252 [2024-12-10 04:14:55.260158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.260206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.260411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.260446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.260701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.260735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.261028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.261062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.261270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.261306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.261488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.261523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.261732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.261766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.261897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.261932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.262158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.262202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.262436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.262470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 
00:27:56.252 [2024-12-10 04:14:55.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.262705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.263007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.263041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.263307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.263342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.252 [2024-12-10 04:14:55.263536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.252 [2024-12-10 04:14:55.263570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.252 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.263782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.263816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.264108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.264142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.264368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.264403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.264532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.264567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.264845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.264880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.265140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.265185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 
00:27:56.253 [2024-12-10 04:14:55.265474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.265508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.265774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.265809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.266012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.266046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.266361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.266398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.266532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.266566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.266781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.266815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.267103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.267137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.267465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.267501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.267716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.267750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.267938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.267972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 
00:27:56.253 [2024-12-10 04:14:55.268248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.268284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.268421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.268455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.268750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.268784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.268968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.269003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.269260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.269301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.269503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.269538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.269791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.269826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.270083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.270118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.270383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.270419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.270711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.270745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 
00:27:56.253 [2024-12-10 04:14:55.271020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.271055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.271249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.271284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.271569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.271604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.271835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.271869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.272143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.272194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.272467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.272502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.272727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.272762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.272875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.272910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.273214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.273249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.273452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.273487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 
00:27:56.253 [2024-12-10 04:14:55.273788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.273822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.274082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.274116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.274409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.274445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.253 qpair failed and we were unable to recover it. 00:27:56.253 [2024-12-10 04:14:55.274736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.253 [2024-12-10 04:14:55.274770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.275013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.275268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.275304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.275509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.275544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.275818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.275851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.276158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.276203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.276483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.276518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 
00:27:56.254 [2024-12-10 04:14:55.276714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.276748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.276933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.276973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.277186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.277222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.277425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.277459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.277758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.277793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.277981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.278015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.278300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.278335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.278601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.278636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.278762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.278796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.278978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.279011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 
00:27:56.254 [2024-12-10 04:14:55.279310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.279345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.279542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.279578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.279802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.279837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.280112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.280147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.280434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.280469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.280749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.280783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.280984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.281019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.281276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.281312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.281436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.281470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 00:27:56.254 [2024-12-10 04:14:55.281652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.254 [2024-12-10 04:14:55.281686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.254 qpair failed and we were unable to recover it. 
00:27:56.254 [2024-12-10 04:14:55.281896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.254 [2024-12-10 04:14:55.281930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.254 qpair failed and we were unable to recover it.
00:27:56.254 [... the three-line failure sequence above repeats for every subsequent reconnect attempt from 04:14:55.282 through 04:14:55.339; only the timestamps vary — errno (111), tqpair (0x120e1a0), addr (10.0.0.2), and port (4420) are identical throughout ...]
00:27:56.260 [2024-12-10 04:14:55.339582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.260 [2024-12-10 04:14:55.339616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.260 qpair failed and we were unable to recover it.
00:27:56.260 [2024-12-10 04:14:55.339916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.339950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.340213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.340249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.340459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.340493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.340750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.340785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.340971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.341007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.341259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.341296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.341552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.341586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.341806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.341841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.342029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.342065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.342363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.342401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 
00:27:56.260 [2024-12-10 04:14:55.342663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.342697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.342895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.342931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.343135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.343194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.343405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.343441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.343562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.343598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.343797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.343831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.344119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.344389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.344424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.344696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.344731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.345015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.345052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 
00:27:56.260 [2024-12-10 04:14:55.345295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.345333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.345535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.345569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.260 [2024-12-10 04:14:55.345786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.260 [2024-12-10 04:14:55.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.260 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.346005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.346040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.346308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.346344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.346619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.346655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.346921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.346956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.347103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.347137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.347405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.347440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.347646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.347683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 
00:27:56.261 [2024-12-10 04:14:55.347965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.348000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.348188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.348223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.348503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.348537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.348828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.348862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.349050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.349087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.349230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.349267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.349471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.349505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.349791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.349826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.350034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.350071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.350295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.350333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 
00:27:56.261 [2024-12-10 04:14:55.350563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.350597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.350818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.350852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.351128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.351163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.351398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.351435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.351692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.351726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.351943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.351977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.352184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.352221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.352365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.352400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.352672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.352707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.352909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.352943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 
00:27:56.261 [2024-12-10 04:14:55.353152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.353203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.353388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.353422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.353675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.353711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.353912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.353947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.354130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.354164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.354398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.354434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.354738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.354779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.354983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.355018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.355181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.355217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.355409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.355444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 
00:27:56.261 [2024-12-10 04:14:55.355648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.355682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.355895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.261 [2024-12-10 04:14:55.355930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.261 qpair failed and we were unable to recover it. 00:27:56.261 [2024-12-10 04:14:55.356162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.356211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.356515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.356550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.356758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.356792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.357067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.357101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.357267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.357303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.357523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.357564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.357825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.357858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.358129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.358162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 
00:27:56.262 [2024-12-10 04:14:55.358325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.358360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.358561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.358595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.358805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.358839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.359098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.359133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.359359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.359396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.359606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.359642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.359912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.359945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.360252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.360287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.360482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.360517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.360718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.360753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 
00:27:56.262 [2024-12-10 04:14:55.361027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.361062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.361365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.361400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.361596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.361630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.361778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.361818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.362079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.362114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.362335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.362372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.362638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.362672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.362800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.362834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.363046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.363080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.363418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.363455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 
00:27:56.262 [2024-12-10 04:14:55.363649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.363683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.363797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.363832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.364102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.364137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.364423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.364459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.364738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.364774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.365056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.365092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.365276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.365313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.365525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.365561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.365772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.365808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.365950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.365986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 
00:27:56.262 [2024-12-10 04:14:55.366188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.366222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.262 [2024-12-10 04:14:55.366505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.262 [2024-12-10 04:14:55.366539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.262 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.366748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.366783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.366986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.367021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.367249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.367287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.367562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.367595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.367875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.367910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.368195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.368230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.368508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.368541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.368757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.368791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 
00:27:56.263 [2024-12-10 04:14:55.369084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.369124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.369384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.369423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.369628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.369665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.369849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.369886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.370082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.370115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.370406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.370443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.370704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.370739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.370974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.371006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.371213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.371250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.371400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.371434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 
00:27:56.263 [2024-12-10 04:14:55.371652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.371688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.371903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.371937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.372140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.372198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.372385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.372420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.372703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.372736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.372932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.372966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.373190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.373226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.373486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.373520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.373707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.373740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.373959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.374001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 
00:27:56.263 [2024-12-10 04:14:55.374224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.374263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.374520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.374557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.374692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.374727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.374860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.374896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.375101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.375137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.375451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.375489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.375768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.375804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.375961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.375998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.376148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.376199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.376406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.376442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 
00:27:56.263 [2024-12-10 04:14:55.376646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.376681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.263 qpair failed and we were unable to recover it. 00:27:56.263 [2024-12-10 04:14:55.376961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.263 [2024-12-10 04:14:55.376997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.377125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.377159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.377358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.377393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.377584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.377618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.377750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.377784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.377973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.378008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.378137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.378185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.378410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.378446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 00:27:56.264 [2024-12-10 04:14:55.378725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.264 [2024-12-10 04:14:55.378759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.264 qpair failed and we were unable to recover it. 
00:27:56.264 [2024-12-10 04:14:55.378962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.378997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.379206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.379243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.379432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.379467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.379717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.379752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.379955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.379989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.380217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.380254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.380378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.380413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.380537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.380572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.380754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.380789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.380984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.381019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.381140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.381189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.381386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.381424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.381621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.381659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.381864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.381908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.382202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.382242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.382460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.382496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.382684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.382722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.382933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.382972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.383104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.383138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.383415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.383452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.383597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.383637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.383965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.384004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.384143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.384195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.384383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.384420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.384706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.384741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.384881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.384918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.264 [2024-12-10 04:14:55.385116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.264 [2024-12-10 04:14:55.385202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.264 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.385435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.385473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.385690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.385735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.386023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.386059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.386337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.386374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.386524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.386557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.386760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.386794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.387000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.387032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.387229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.387262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.387453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.387486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.387784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.387817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.388044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.388079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.388345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.388374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.388595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.388622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.388815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.388844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.388993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.389022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.389268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.389300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.389490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.389522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.389741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.389776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.390040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.390074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.390393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.390425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.390632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.390664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.390939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.390976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.391267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.391305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.391568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.391603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.391914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.391945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.392204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.392237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.392517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.392551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.393798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.393850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.394080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.394110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.394327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.394359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.394609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.394640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.394820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.394852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.395147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.395187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.395392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.395424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.395603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.395635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.395767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.395798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.396025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.396054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.396253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.396290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.265 [2024-12-10 04:14:55.396572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.265 [2024-12-10 04:14:55.396607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.265 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.396755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.396789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.397037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.397074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.397274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.397327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.397549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.397579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.397716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.397745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.397882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.397913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.398185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.398215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.398358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.398388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.398634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.398666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.398842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.398871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.399085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.399109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.399282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.399307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.399560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.399583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.399819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.399844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.400026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.400050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.400269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.400294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.400565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.400590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.400809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.400842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.401128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.401162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.401471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.401508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.401764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.401799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.402011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.402046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.402227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.402263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.402483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.402517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.402792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.402827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.403114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.403150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.403355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.403390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.403593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.403618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.403861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.403896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.404155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.404202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.404403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.404428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.404600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.404634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.404829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.404864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.405050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.405084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.405311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.405347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.266 qpair failed and we were unable to recover it.
00:27:56.266 [2024-12-10 04:14:55.405600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.266 [2024-12-10 04:14:55.405625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.405902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.405937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.406142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.406203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.406486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.406521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.406727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.406753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.407002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.407028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.407193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.407219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.407413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.407454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.407747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.407783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.408057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.408091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.408379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.408415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.408692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.408728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.408926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.408961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.409220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.409256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.409516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.409551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.409858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.409895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.410268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.410306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.410511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.410546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.410804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.410838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.411029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.411064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.411255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.411293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.411434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.411468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.411686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.411720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.411924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.411958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.412233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.412269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.412477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.412511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.412657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.412691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.412888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.412922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.413139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.413182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.413386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.413421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.413555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.413589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.413812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.413847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.413985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.414020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.414155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.414218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.414504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.414539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.414814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.414848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.415157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.415203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.415392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.415425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.415625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.267 [2024-12-10 04:14:55.415659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.267 qpair failed and we were unable to recover it.
00:27:56.267 [2024-12-10 04:14:55.415963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.415998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.416190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.416226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.416350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.416385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.416577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.416612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.416846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.416880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.417158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.417204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.417331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.417365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.417546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.417581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.417885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.417925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.418183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.418219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.418409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.418442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.418699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.418733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.418923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.418957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.419142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.419188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.419396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.419430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.419655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.419691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.419826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.419859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.420066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.420101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.420248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.420283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.420535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.420569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.420891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.420926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.421213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.421250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.421396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.421430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.421683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.421717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.421917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.421951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.422132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.422175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.422396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.422430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.422683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.422718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.422987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.423021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.423230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.268 [2024-12-10 04:14:55.423266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.268 qpair failed and we were unable to recover it.
00:27:56.268 [2024-12-10 04:14:55.423450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.423485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.423687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.423721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.424004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.424037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.424315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.424351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.424557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.424591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.424801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.424835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.424988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.425021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.425158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.425214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.425448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.268 [2024-12-10 04:14:55.425482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.268 qpair failed and we were unable to recover it. 00:27:56.268 [2024-12-10 04:14:55.425665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.425699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 
00:27:56.269 [2024-12-10 04:14:55.425833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.425867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.426129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.426162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.426304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.426339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.426548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.426581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.426716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.426750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.426958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.426993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.427274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.427311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.427503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.427536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.427797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.427838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.428101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.428134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 
00:27:56.269 [2024-12-10 04:14:55.428274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.428309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.428565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.428599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.428797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.429016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.429050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.429255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.429292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.429514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.429548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.429752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.429786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.430020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.430054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.430199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.430235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.430429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.430463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 
00:27:56.269 [2024-12-10 04:14:55.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.430691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.430887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.430921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.431207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.431244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.431519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.431553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.431688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.431722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.431919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.431953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.432219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.432254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.432435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.432469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.432738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.432773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.432956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.432989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 
00:27:56.269 [2024-12-10 04:14:55.433124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.433158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.433374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.433409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.433635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.269 [2024-12-10 04:14:55.433669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.269 qpair failed and we were unable to recover it. 00:27:56.269 [2024-12-10 04:14:55.433890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.433925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.434116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.434150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.434438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.434474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.434679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.434713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.434970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.435004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.435195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.435232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.435485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.435519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 
00:27:56.270 [2024-12-10 04:14:55.435647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.435681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.435884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.435918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.436180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.436215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.436470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.436504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.436754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.436788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.436971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.437005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.437148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.437192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.437413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.437447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.437668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.437709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.437876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.437909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 
00:27:56.270 [2024-12-10 04:14:55.438128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.438162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.438396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.438429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.438628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.438662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.438862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.438897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.439090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.439123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.439269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.439304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.439486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.439520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.439718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.439752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.439945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.439980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.440121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.440154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 
00:27:56.270 [2024-12-10 04:14:55.440439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.440475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.440679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.440713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.440977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.441011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.441136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.441191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.441339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.441372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.441568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.441602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.441804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.441838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.442118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.442152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.442305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.442340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.442563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.442598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 
00:27:56.270 [2024-12-10 04:14:55.442847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.442883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.443069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.443102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.270 qpair failed and we were unable to recover it. 00:27:56.270 [2024-12-10 04:14:55.443364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.270 [2024-12-10 04:14:55.443400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.443663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.443697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.443955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.443989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.444223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.444260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.444467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.444501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.444640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.444674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.444853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.444887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.445022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.445056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 
00:27:56.271 [2024-12-10 04:14:55.445200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.445237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.445503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.445537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.445662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.445697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.445945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.445979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.446282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.446317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.446521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.446555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.446809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.447060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.447094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.447208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.447250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.447451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.447485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 
00:27:56.271 [2024-12-10 04:14:55.447763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.447797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.448056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.448091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.448351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.448388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.448519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.448552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.448781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.448815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.448996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.449031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.449217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.449253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.449529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.449563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.449798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.449832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.450032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.450066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 
00:27:56.271 [2024-12-10 04:14:55.450251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.450287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.450475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.450508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.450727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.450762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.450887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.450921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.451142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.451190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.271 [2024-12-10 04:14:55.451468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.271 [2024-12-10 04:14:55.451501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.271 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.451635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.451669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.451789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.451823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.452029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.452063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.452334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.452371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 
00:27:56.272 [2024-12-10 04:14:55.452574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.452607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.452794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.452828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.452968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.453003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.453212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.453373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.453407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.453537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.453572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.453753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.453786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.454053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.454087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.454286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.454321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.454514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.454548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 
00:27:56.272 [2024-12-10 04:14:55.454665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.454698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.454915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.454949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.455143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.455183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.455378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.455413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.455605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.455639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.455834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.455868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.456009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.456043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.456229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.456264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.456405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.456444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.456571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.456605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 
00:27:56.272 [2024-12-10 04:14:55.456797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.456830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.457018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.457052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.457276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.457312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.457503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.457538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.457734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.457770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.457893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.457926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.458158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.458204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.458399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.458434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.458563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.458597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 00:27:56.272 [2024-12-10 04:14:55.458869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.272 [2024-12-10 04:14:55.458903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.272 qpair failed and we were unable to recover it. 
00:27:56.272 [2024-12-10 04:14:55.459086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.272 [2024-12-10 04:14:55.459119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.272 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f23d4000b90 (addr=10.0.0.2, port=4420) repeats continuously between 04:14:55.459 and 04:14:55.506, each attempt ending with "qpair failed and we were unable to recover it."; duplicate entries collapsed ...]
00:27:56.278 [2024-12-10 04:14:55.506765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.278 [2024-12-10 04:14:55.506799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.278 qpair failed and we were unable to recover it.
00:27:56.278 [2024-12-10 04:14:55.506987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.507020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.507217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.507253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.507362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.507395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.507517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.507550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.507735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.507768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.508033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.508073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.508379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.508414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.508555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.508588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.508775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.508808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.508931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.508964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 
00:27:56.278 [2024-12-10 04:14:55.509099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.509133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.509283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.509317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.509588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.509621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.509795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.509828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.509932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.509965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.278 [2024-12-10 04:14:55.510181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.278 [2024-12-10 04:14:55.510216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.278 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.510404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.510438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.510578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.510611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.510787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.510821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.511021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 
00:27:56.562 [2024-12-10 04:14:55.511174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.511414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.511577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.511793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.511950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.511983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.512221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.512255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.512370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.512402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.512590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.512624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.512864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.512899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.513074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 
00:27:56.562 [2024-12-10 04:14:55.513245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.513386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.513602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.513751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.513956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.513990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.514119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.514153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.514339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.514373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.514586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.514619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.562 [2024-12-10 04:14:55.514856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-12-10 04:14:55.514889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.562 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.515009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 
00:27:56.563 [2024-12-10 04:14:55.515155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.515562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.515709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.515916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.515949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.516215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.516257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.516517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.516549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.516680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.516713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.516958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.516991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.517099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.517133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 
00:27:56.563 [2024-12-10 04:14:55.517391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.517429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.517581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.517613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.517888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.517922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.518180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.518215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.518480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.518514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.518692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.518725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.518987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.519021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.519242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.519278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.519522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.519556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.519697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.519730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 
00:27:56.563 [2024-12-10 04:14:55.519940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.519973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.520178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.520212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.520348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.520494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.520527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.520813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.520846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.521023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.521230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.521398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.521547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.521764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 
00:27:56.563 [2024-12-10 04:14:55.521915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.521948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.522069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.522101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.563 qpair failed and we were unable to recover it. 00:27:56.563 [2024-12-10 04:14:55.522315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.563 [2024-12-10 04:14:55.522349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.522522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.522556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.522726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.522759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.522900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.522933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.523143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.523186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.523371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.523404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.523594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.523626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.523745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.523776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 
00:27:56.564 [2024-12-10 04:14:55.523897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.523927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.524041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.524072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.524256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.524290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.524427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.524458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.524632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.524663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.524918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.524955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.525296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.525499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.525632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 
00:27:56.564 [2024-12-10 04:14:55.525785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.525954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.525986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.526211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.526244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.526421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.526454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.526565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.526597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.526809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.526842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.526962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.526995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.527190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.527224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.527329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.527361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.527506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.527539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 
00:27:56.564 [2024-12-10 04:14:55.527651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.527683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.564 qpair failed and we were unable to recover it. 00:27:56.564 [2024-12-10 04:14:55.527826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.564 [2024-12-10 04:14:55.527861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.527997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.528030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.528198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.528233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.528353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.528389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.528566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.528598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.528806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.528839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.529042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.529075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.529188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.529223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.529412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.529445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 
00:27:56.565 [2024-12-10 04:14:55.529724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.529757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.529936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.529968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.530078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.530111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.530242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.530276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.530469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.530503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.530619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.530652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.530893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.530926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.531047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.531080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.531192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.531226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.531512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.531544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 
00:27:56.565 [2024-12-10 04:14:55.531667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.531700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.531891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.531924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.532049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.532081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.532216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.532250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.532432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.532464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.532636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.565 [2024-12-10 04:14:55.532674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.565 qpair failed and we were unable to recover it. 00:27:56.565 [2024-12-10 04:14:55.532914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.532947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.533134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.533181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.533380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.533412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.533641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.533674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-12-10 04:14:55.533845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.533878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.534914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.534948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.535066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.535098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.535310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.535344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.535536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.535570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-12-10 04:14:55.535765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.535798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.535971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.536004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.536214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.536248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.536497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.536531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.536650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.536682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.536857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.536889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.537008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.537041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.537176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.537210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.537415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.537448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.537708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.537741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 
00:27:56.566 [2024-12-10 04:14:55.537869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.537901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.538027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.538059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.538256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.538292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.566 qpair failed and we were unable to recover it. 00:27:56.566 [2024-12-10 04:14:55.538487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.566 [2024-12-10 04:14:55.538520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.538648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.538681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.538874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.538906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.539022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.539055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.539309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.539343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.539534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.539566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.539799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-12-10 04:14:55.539903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.539936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.540043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.540075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.540267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.540300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.540489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.540522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.540789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.540821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.541005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.541151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.541456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.541622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.541786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-12-10 04:14:55.541925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.541957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.542078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.542111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.542248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.542282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.542404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.542436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.542650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.542683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.542940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.542972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.543095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.543129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.543244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.543278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.543384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.543417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.543602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.543635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 
00:27:56.567 [2024-12-10 04:14:55.543817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.543850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.544090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.544122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.544336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.544370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.544625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.544657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.567 [2024-12-10 04:14:55.544907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.567 [2024-12-10 04:14:55.544940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.567 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.545115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.545148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.545331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.545365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.545548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.545580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.545702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.545735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.545870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.545902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 
00:27:56.568 [2024-12-10 04:14:55.546098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.546277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.546312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.546502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.546535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.546645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.546677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.546883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.546916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 
00:27:56.568 [2024-12-10 04:14:55.547755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.547962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.547994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.548238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.548273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.548398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.548431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.548638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.548671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.548844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.548882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.549066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.549099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.549274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.549308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.549493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.549526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.549743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.549775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 
00:27:56.568 [2024-12-10 04:14:55.549905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.549938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.550128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.568 [2024-12-10 04:14:55.550161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.568 qpair failed and we were unable to recover it. 00:27:56.568 [2024-12-10 04:14:55.550314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.550348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.550470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.550503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.550684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.550718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.550966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.550998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.551186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.551220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.551474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.551507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.551638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.551670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.551785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.551819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 
00:27:56.569 [2024-12-10 04:14:55.553221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.553277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.553586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.553620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.553860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.553894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.554913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.554947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.555190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.555225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 
00:27:56.569 [2024-12-10 04:14:55.555417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.555450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.555561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.555594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.555835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.555910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.556062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.556378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.556413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.556553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.556588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.556726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.556759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.556950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.556982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.557148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.557192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.557323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.557355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 
00:27:56.569 [2024-12-10 04:14:55.557597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.557631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.557752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.557784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.557958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.557991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.558245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.558279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.558461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.558493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.558687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.558720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.558995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.559028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.559266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.559300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.559405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.559437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.559574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.559606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 
00:27:56.569 [2024-12-10 04:14:55.559733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.569 [2024-12-10 04:14:55.559765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.569 qpair failed and we were unable to recover it. 00:27:56.569 [2024-12-10 04:14:55.559934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.559968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.560137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.560181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.560361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.560394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.560574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.560607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.560807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.560841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.561043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.561209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.561370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.561518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 
00:27:56.570 [2024-12-10 04:14:55.561747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.561902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.561935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.562120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.562153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.562267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.562300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.562471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.562504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.562614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.562648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.562818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.562850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.563029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.563185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.563392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 
00:27:56.570 [2024-12-10 04:14:55.563615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.563760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.563901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.563933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.564121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.564155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.564339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.564373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.564563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.564596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.564780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.564814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.564936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.564969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.565231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.565265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.565455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.565489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 
00:27:56.570 [2024-12-10 04:14:55.565597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.565630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.565762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.565794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.565915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.565949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.566187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.566222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.566410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.566443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.566554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.566587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.566767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.566805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.566949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.566992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.567102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.567135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.570 qpair failed and we were unable to recover it. 00:27:56.570 [2024-12-10 04:14:55.567273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.570 [2024-12-10 04:14:55.567306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [2024-12-10 04:14:55.567418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.567452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.567586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 215909 Killed "${NVMF_APP[@]}" "$@" 00:27:56.571 [2024-12-10 04:14:55.567618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.567746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.567778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.567958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.567990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.568103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.568137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:56.571 [2024-12-10 04:14:55.568346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.568379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.568649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.568682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:56.571 [2024-12-10 04:14:55.568805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.568837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.571 [2024-12-10 04:14:55.569032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.569204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.571 [2024-12-10 04:14:55.569355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.569499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.571 [2024-12-10 04:14:55.569714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.569870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.569903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.570030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.570063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.570238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.570273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 00:27:56.571 [2024-12-10 04:14:55.570442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.571 [2024-12-10 04:14:55.570474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.571 qpair failed and we were unable to recover it. 
00:27:56.571 [connect()/qpair-failure retries continue from 04:14:55.570728 through 04:14:55.574881; identical messages elided]
00:27:56.572 [connect()/qpair-failure retries continue from 04:14:55.575006 through 04:14:55.576185; identical messages elided]
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=216620
00:27:56.572 [connect()/qpair-failure retries continue from 04:14:55.576372 through 04:14:55.576548; identical messages elided]
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 216620
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 216620 ']'
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:56.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:56.572 [connect()/qpair-failure retries continue throughout, 04:14:55.576659 through 04:14:55.578030; identical messages elided]
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:56.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:56.572 [connect()/qpair-failure retries continue from 04:14:55.578223 through 04:14:55.579542; identical messages elided]
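For context: the trace above restarts the target. nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then polls until the new process (pid 216620) is listening on its RPC socket (rpc_addr=/var/tmp/spdk.sock), giving up after max_retries=100 attempts; until that listener is back, the initiator's connect() retries keep failing, which is the elided error stream around this point. Below is a sketch of that kind of wait-for-listen polling, in C for consistency with the sketch above. The socket path and retry count come from the log; the function name, the one-second delay, and the connect-probe approach are assumptions, not the actual autotest_common.sh implementation.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical helper: probe a UNIX-domain socket until a server is
 * accepting on it, or give up after max_retries one-second attempts. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);          /* server is up and listening */
            return 0;
        }
        close(fd);              /* ENOENT/ECONNREFUSED: not ready yet */
        sleep(1);
    }
    return -1;                  /* gave up after max_retries attempts */
}

int main(void)
{
    /* Path and retry budget taken from the trace above. */
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}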
00:27:56.572 [2024-12-10 04:14:55.579731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-12-10 04:14:55.579763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-12-10 04:14:55.579932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-12-10 04:14:55.579965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-12-10 04:14:55.580209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-12-10 04:14:55.580243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.572 [2024-12-10 04:14:55.580366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.572 [2024-12-10 04:14:55.580398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.572 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.580584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.580618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.580740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.580772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.581078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.581110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.581338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.581373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.581484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.581523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.581717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.581749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 
00:27:56.573 [2024-12-10 04:14:55.581922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.581954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.582140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.582184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.582366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.582396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.582662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.582694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.582979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.583130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.583289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.583457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.583679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.583890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.583923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 
00:27:56.573 [2024-12-10 04:14:55.584096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.584129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.584263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.584297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.584407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.584440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.584677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.584711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.584973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.585121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.585298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.585525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.585661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.585873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.585905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 
00:27:56.573 [2024-12-10 04:14:55.586027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.586059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.586330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.586371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.586560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.586593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.586771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.586804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.587053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.587086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.587296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.587344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.587453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.587485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.587601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.587634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.587805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.587838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.588078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.588111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 
00:27:56.573 [2024-12-10 04:14:55.588304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.588338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.588464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.588497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.588607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.573 [2024-12-10 04:14:55.588639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.573 qpair failed and we were unable to recover it. 00:27:56.573 [2024-12-10 04:14:55.588831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.588866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.589158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.589202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.589319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.589359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.589477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.589512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.589692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.589724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.589832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.589866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.590053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.590089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 
00:27:56.574 [2024-12-10 04:14:55.590296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.590333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.590466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.590502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.590694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.590859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.590892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.591047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.591278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.591443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.591664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.591813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.591991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.592031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 
00:27:56.574 [2024-12-10 04:14:55.592158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.592207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.592329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.592366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.592576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.592617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.592761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.592795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.592995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.593027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.593153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.593197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.593394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.593427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.593616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.593650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.593803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.593839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.594023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.594057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 
00:27:56.574 [2024-12-10 04:14:55.594228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.594265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.594530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.594563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.594746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.594780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.594897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.594931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.595131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.595180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.595395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.595428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.595612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.595649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.595841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.595874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.596064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.596098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.596230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.596264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 
00:27:56.574 [2024-12-10 04:14:55.596536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.596573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.596760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.574 [2024-12-10 04:14:55.596797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.574 qpair failed and we were unable to recover it. 00:27:56.574 [2024-12-10 04:14:55.596988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.597021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.597190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.597225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.597342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.597375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.597486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.597519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.597759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.597795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.598000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.598034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.598247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.598282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.598454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.598488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 
00:27:56.575 [2024-12-10 04:14:55.598606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.598640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.598790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.598824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.599059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.599248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.599497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.599721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.599871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.599983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.600122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.600307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 
00:27:56.575 [2024-12-10 04:14:55.600536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.600772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.600924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.600958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.601104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.601138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.601344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.601379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.601622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.601654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.601782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.601812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.602000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.602034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.602139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.602179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.602372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.602404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 
00:27:56.575 [2024-12-10 04:14:55.602582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.602612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.602813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.602844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.603016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.603047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.603229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.603263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.603448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.575 [2024-12-10 04:14:55.603481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.575 qpair failed and we were unable to recover it. 00:27:56.575 [2024-12-10 04:14:55.603608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-12-10 04:14:55.603639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-12-10 04:14:55.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-12-10 04:14:55.603768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-12-10 04:14:55.603881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-12-10 04:14:55.603920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-12-10 04:14:55.604113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-12-10 04:14:55.604142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 00:27:56.576 [2024-12-10 04:14:55.604261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.576 [2024-12-10 04:14:55.604293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.576 qpair failed and we were unable to recover it. 
00:27:56.577 [2024-12-10 04:14:55.611452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.577 [2024-12-10 04:14:55.611524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.577 qpair failed and we were unable to recover it.
00:27:56.577 [... the triple repeats for tqpair=0x7f23e0000b90 through 04:14:55.613137 ...]
00:27:56.577 [2024-12-10 04:14:55.613196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121c0f0 (9): Bad file descriptor
00:27:56.577 [2024-12-10 04:14:55.613426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.577 [2024-12-10 04:14:55.613497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.577 qpair failed and we were unable to recover it.
00:27:56.577 [... one more triple for tqpair=0x7f23d4000b90 at 04:14:55.613766, then the triple repeats for tqpair=0x120e1a0 from 04:14:55.613997 through 04:14:55.626118 ...]
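For context: errno = 111 on Linux is ECONNREFUSED, i.e. the host answered but nothing was accepting connections on 10.0.0.2:4420, so the NVMe/TCP listener was down (or not yet up) for the duration of this stretch. The standalone C sketch below is illustrative only, not SPDK code; the address and port are copied from the log, and it reproduces the same failure mode when run against a reachable host with that port closed:

/* repro_econnrefused.c - illustrative sketch, not part of SPDK. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Reachable host + closed port => errno 111 (ECONNREFUSED);
         * an unreachable host would yield ETIMEDOUT or EHOSTUNREACH instead. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}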
00:27:56.579 [2024-12-10 04:14:55.626248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.579 [2024-12-10 04:14:55.626267] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:27:56.579 [2024-12-10 04:14:55.626283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.579 [2024-12-10 04:14:55.626308] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:56.579 qpair failed and we were unable to recover it.
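For reference, in the DPDK EAL arguments above: -c 0xF0 is the core mask, and 0xF0 = 0b11110000, so the nvmf target is pinned to logical cores 4-7. --file-prefix=spdk0 namespaces this process's hugepage and shared-memory files so it can coexist with other DPDK/SPDK processes on the node, and --proc-type=auto lets EAL pick between primary and secondary process roles at startup.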
00:27:56.579 [... the connect() failed (errno = 111) triple resumes repeating for tqpair=0x120e1a0 from 04:14:55.626469 through 04:14:55.638972 ...]
00:27:56.581 [2024-12-10 04:14:55.639133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.581 [2024-12-10 04:14:55.639207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.581 qpair failed and we were unable to recover it.
00:27:56.581 [2024-12-10 04:14:55.639410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.581 [2024-12-10 04:14:55.639453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.581 qpair failed and we were unable to recover it.
00:27:56.581 [... the triple repeats for tqpair=0x7f23e0000b90 through 04:14:55.641073 ...]
00:27:56.581 [2024-12-10 04:14:55.640632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.640665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.640884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.640917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.641040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.641073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.641332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.641372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.641502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.641536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.641651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.641685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.641819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.641862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.642063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.642097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.642219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.642255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 00:27:56.581 [2024-12-10 04:14:55.642524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.581 [2024-12-10 04:14:55.642557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.581 qpair failed and we were unable to recover it. 
00:27:56.581 [2024-12-10 04:14:55.642799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.642834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.642959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.642993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.643191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.643226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.643468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.643504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.643741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.643775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.643956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.643991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.644101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.644134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.644394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.644579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.644724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.644758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 
00:27:56.582 [2024-12-10 04:14:55.644963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.645133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.645378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.645594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.645742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.645889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.645921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.646115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.646154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.646408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.646446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.646557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.646589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.646780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.646815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 
00:27:56.582 [2024-12-10 04:14:55.647006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.647148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.647444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.647597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.647772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.647944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.647987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.648191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.648225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.648342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.648375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.648478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.648510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.648682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.648716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 
00:27:56.582 [2024-12-10 04:14:55.648826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.648860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.648988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.649021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.649136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.582 [2024-12-10 04:14:55.649197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.582 qpair failed and we were unable to recover it. 00:27:56.582 [2024-12-10 04:14:55.649373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.649406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.649589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.649623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.649862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.649895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.650038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.650264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.650427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.650587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 
00:27:56.583 [2024-12-10 04:14:55.650726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.650959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.650993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.651241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.651275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.651542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.651577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.651703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.651735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.651858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.651892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.652077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.652111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.652249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.652283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.652484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.652517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.652637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.652672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 
00:27:56.583 [2024-12-10 04:14:55.652911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.652945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.653132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.653190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.653311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.653347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.653466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.653500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.653741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.653776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.653900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.653933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.654210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.654247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.654426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.654461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.654634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.654668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.654783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.654817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 
00:27:56.583 [2024-12-10 04:14:55.654939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.654972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.655155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.655198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.655317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.655351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.655481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.655515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.655709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.655751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.655896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.655930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.656053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.656086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.656208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.656243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.656483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.656517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.656623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.656657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 
00:27:56.583 [2024-12-10 04:14:55.656852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.656887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.657061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.657095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.583 qpair failed and we were unable to recover it. 00:27:56.583 [2024-12-10 04:14:55.657286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.583 [2024-12-10 04:14:55.657321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.657522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.657558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.657683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.657719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.657921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.657956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.658131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.658164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.658297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.658340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.658535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.658570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.658684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.658719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 
00:27:56.584 [2024-12-10 04:14:55.658822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.658856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.659096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.659130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.659346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.659380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.659555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.659589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.659767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.659801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.659920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.659953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.660126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.660159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.660362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.660395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.660508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.660542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.660660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.660693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 
00:27:56.584 [2024-12-10 04:14:55.660870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.660903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.661105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.661139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.661276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.661311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.661515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.661551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.661758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.661792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.661897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.661930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.662061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.662095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.662270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.662306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.662544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.662578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.662778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.662811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 
00:27:56.584 [2024-12-10 04:14:55.663001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.663152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.663320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.663547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.663778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.663940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.663974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.664180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.664217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.664480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.664515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.664705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.664739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.664991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.665024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 
00:27:56.584 [2024-12-10 04:14:55.665179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.584 [2024-12-10 04:14:55.665214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.584 qpair failed and we were unable to recover it. 00:27:56.584 [2024-12-10 04:14:55.665344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.665379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.665498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.665532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.665643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.665677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.665850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.665883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.666018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.666187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.666404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.666559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.666730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 
00:27:56.585 [2024-12-10 04:14:55.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.666977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.667181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.667221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.667427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.667461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.667662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.667698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.667875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.668102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.668134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.668330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.668366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.668488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.668523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.668706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.668738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.668919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.668951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 
00:27:56.585 [2024-12-10 04:14:55.669147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.669194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.669377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.669409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.669533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.669566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.669745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.669778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.669955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.669987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.670114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.670147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.670498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.670533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.670743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.670781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.671002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.671035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 00:27:56.585 [2024-12-10 04:14:55.671263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.585 [2024-12-10 04:14:55.671301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.585 qpair failed and we were unable to recover it. 
00:27:56.585 [2024-12-10 04:14:55.671488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.585 [2024-12-10 04:14:55.671522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.585 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeats continuously: for tqpair=0x120e1a0 through 04:14:55.674, then for tqpair=0x7f23d4000b90 through 04:14:55.683, with single failures on tqpair=0x7f23d8000b90 and tqpair=0x7f23e0000b90 at 04:14:55.683; every attempt targets addr=10.0.0.2, port=4420 ...]
00:27:56.587 [2024-12-10 04:14:55.684117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.587 [2024-12-10 04:14:55.684158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.587 qpair failed and we were unable to recover it.
[... identical failures continue for tqpair=0x120e1a0 through 04:14:55.704, briefly hitting tqpair=0x7f23d4000b90, 0x7f23d8000b90, and 0x7f23e0000b90 around 04:14:55.693-55.694, then shift to tqpair=0x7f23e0000b90 from 04:14:55.704 onward; the target is always addr=10.0.0.2, port=4420 ...]
00:27:56.590 [2024-12-10 04:14:55.707031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.707064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.707191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.707227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.707402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.707435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.707621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.707655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.707842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.707876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.708036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.708070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.708184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.708218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.708407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.708440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.708705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.708719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.590 [2024-12-10 04:14:55.708739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 
00:27:56.590 [2024-12-10 04:14:55.708855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.708895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.709093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.709127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.709393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.709429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.709643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.709677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.709810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.709844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.709998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.710032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.710205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.710241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.710371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.710404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.710585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.710618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 00:27:56.590 [2024-12-10 04:14:55.710852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.590 [2024-12-10 04:14:55.710886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.590 qpair failed and we were unable to recover it. 
00:27:56.590 [2024-12-10 04:14:55.711073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.711118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.711262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.711299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.711426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.711459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.711673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.711706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.711889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.711922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.712153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.712202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.712396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.712429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.712648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.712680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.712803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.712835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.713009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.713041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.713189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.713224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.713400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.713433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.713616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.590 [2024-12-10 04:14:55.713648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.590 qpair failed and we were unable to recover it.
00:27:56.590 [2024-12-10 04:14:55.713760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.713793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.713918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.713954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.714081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.714114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.714389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.714424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.714619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.714660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.714814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.714853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.715095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.715131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.715246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.715282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.715521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.715555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.715751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.715784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.715907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.715941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.716121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.716157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.716352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.716388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.716596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.716632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.716833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.716868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.716990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.717146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.717324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.717482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.717694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.717848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.717883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.718099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.718145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.718397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.718438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.718613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.718648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.718786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.718821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.718995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.719030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.719206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.719243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.719416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.719449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.719711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.719746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.719864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.719897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.720077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.720110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.720326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.720371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.720561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.720594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.720728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.720765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.591 [2024-12-10 04:14:55.720889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.591 [2024-12-10 04:14:55.720924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.591 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.721203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.721240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.721355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.721390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.721581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.721802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.721834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.722128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.722162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.722369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.722407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.722605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.722645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.722840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.722871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.723071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.723106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.723405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.723447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.723581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.723617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.723831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.723868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.724009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.724041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.724266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.724302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.724511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.724544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.724666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.724699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.724859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.724894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.725094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.725131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.725320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.725362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.725551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.725584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.725773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.725805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.725940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.725976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.726088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.726133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.726361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.726396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.726595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.726629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.726817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.726850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.726969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.727003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.727188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.727223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.727406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.727446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.727690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.727728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.727926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.727961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.728067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.728099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.728237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.728272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.728572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.728607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.728731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.728766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.728882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.728915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.729060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.729100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.729292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.729325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.729511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.729545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.592 qpair failed and we were unable to recover it.
00:27:56.592 [2024-12-10 04:14:55.729755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.592 [2024-12-10 04:14:55.729795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.730079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.730116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.730271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.730306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.730488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.730523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.730691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.730724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.731044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.731078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.731279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.731315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.731509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.731545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.731729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.731765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.732012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.732046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.732149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.732193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.732465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.732500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.732712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.732747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.732878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.732914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.733085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.733243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.733412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.733633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.733845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.733973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.734132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.734367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.734612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.734764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.734932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.734967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.735099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.735133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.735279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.735319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.735442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.735475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.735691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.735724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.735898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.735931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.736139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.736320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.736473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.736706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.736863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.736972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.737005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.737242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.737276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.737489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.737521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.737708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.737741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.593 qpair failed and we were unable to recover it.
00:27:56.593 [2024-12-10 04:14:55.737858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.593 [2024-12-10 04:14:55.737890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.738965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.738997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.739138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.739182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.739386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.739419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.739630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.739663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.739872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.739906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.740035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.740068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.740313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.740354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.740547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.740580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.740750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.740784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.741021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.741054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.741189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.741223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.741413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.741448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.741698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.741730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.741917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.741951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.742147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.742188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.742323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.742355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.742547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.742579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.742749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.742782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.742953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.742986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.743129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.743162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.743367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.743400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.743643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.743677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.743795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.743827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.744001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.744036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.744274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.744308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.744429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.744461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.744654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.744688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.744890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.594 [2024-12-10 04:14:55.744924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.594 qpair failed and we were unable to recover it.
00:27:56.594 [2024-12-10 04:14:55.745129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.745162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.745357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.745391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.745511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.745544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.745780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.745812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.745949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.745981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.746156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.746205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.746393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.746426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.746680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.746714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.746884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.746917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.747153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.595 [2024-12-10 04:14:55.747200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.595 qpair failed and we were unable to recover it.
00:27:56.595 [2024-12-10 04:14:55.747377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.747412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.747594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.747627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.747816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.747855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.748181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.748329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.748536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.748699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.748846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.748965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.749120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 
00:27:56.595 [2024-12-10 04:14:55.749301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.749527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.749681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.749906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.749940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.750059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.750093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.750223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.750258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.750436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.750468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.750643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.750677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.750917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.750950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.751211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.751248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 
00:27:56.595 [2024-12-10 04:14:55.751484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.751517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.751696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.751730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.751929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.751964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.752081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.595 [2024-12-10 04:14:55.752113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.595 qpair failed and we were unable to recover it. 00:27:56.595 [2024-12-10 04:14:55.752319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.752352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.752487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.752521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.752726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.752761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.753041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.753077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.753321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.753357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.753548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.753584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 
00:27:56.596 [2024-12-10 04:14:55.753690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.753723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.753897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.753931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.754053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.754088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.754280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.754316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.754429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.754461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.754459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.596 [2024-12-10 04:14:55.754490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.596 [2024-12-10 04:14:55.754500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.596 [2024-12-10 04:14:55.754507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.596 [2024-12-10 04:14:55.754512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.596 [2024-12-10 04:14:55.754672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.754704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.754893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.754924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.755062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it.
00:27:56.596 [2024-12-10 04:14:55.755222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.755375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.755520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.755668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.755879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.755912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.756090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.756039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:56.596 [2024-12-10 04:14:55.756164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:56.596 [2024-12-10 04:14:55.756202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.596 [2024-12-10 04:14:55.756201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:56.596 [2024-12-10 04:14:55.756268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.756450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.756602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it.
00:27:56.596 [2024-12-10 04:14:55.756781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.756936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.756970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.596 [2024-12-10 04:14:55.757095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.596 [2024-12-10 04:14:55.757130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.596 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.757356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.757392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.757592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.757625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.757760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.757795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.757918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.757952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.758153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.758200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.758426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.758461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.758580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.758614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 
00:27:56.597 [2024-12-10 04:14:55.758810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.758845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.759055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.759089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.759276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.759314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.759516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.759551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.759734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.759768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.759900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.759934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.760121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.760154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.760433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.760468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.760698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.760731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.760869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.760904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 
00:27:56.597 [2024-12-10 04:14:55.761197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.761233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.761412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.761446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.761618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.761652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.761914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.761948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.762199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.762235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.762375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.762415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.762551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.762585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.762692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.762725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.762842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.762875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.763012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 
00:27:56.597 [2024-12-10 04:14:55.763158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.763348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.763497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.763634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.597 [2024-12-10 04:14:55.763859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.597 [2024-12-10 04:14:55.763893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.597 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.764030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.764064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.764210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.764245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.764366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.764401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.764583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.764616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.764749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.764782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 
00:27:56.598 [2024-12-10 04:14:55.764976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.765010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.765251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.765286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.765480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.765515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.765689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.765723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.765987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.766021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.766153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.766197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.766388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.766421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.766597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.766631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.766835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.766869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.766988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.767022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 
00:27:56.598 [2024-12-10 04:14:55.767158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.767202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.767413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.767447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.767568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.767608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.767738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.767772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.767979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.768132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.768298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.768447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.768675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.768894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.768928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 
00:27:56.598 [2024-12-10 04:14:55.769053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.769086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.769270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.769305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.769771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.769811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.770003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.770037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.598 [2024-12-10 04:14:55.770232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.598 [2024-12-10 04:14:55.770268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.598 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.770384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.770418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.770559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.770594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.770743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.770776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.770894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.770927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.771052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.771086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 
00:27:56.599 [2024-12-10 04:14:55.771207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.771243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.771360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.771392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.771577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.771611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.771822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.771855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.772880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.772922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 
00:27:56.599 [2024-12-10 04:14:55.773038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.773879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.774147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.774371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.774525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 
00:27:56.599 [2024-12-10 04:14:55.774674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.774823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.774856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.774990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.775954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 00:27:56.599 [2024-12-10 04:14:55.776193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.599 [2024-12-10 04:14:55.776228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.599 qpair failed and we were unable to recover it. 
00:27:56.599 [2024-12-10 04:14:55.776346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.599 [2024-12-10 04:14:55.776381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.600 qpair failed and we were unable to recover it.
00:27:56.600 [... the same three-message record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 04:14:55.776 through 04:14:55.813 for tqpair=0x120e1a0, 0x7f23e0000b90, 0x7f23d8000b90, and 0x7f23d4000b90, always with addr=10.0.0.2, port=4420 ...]
00:27:56.605 [2024-12-10 04:14:55.813363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.605 [2024-12-10 04:14:55.813400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.605 qpair failed and we were unable to recover it. 00:27:56.605 [2024-12-10 04:14:55.813511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.605 [2024-12-10 04:14:55.813545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.605 qpair failed and we were unable to recover it. 00:27:56.605 [2024-12-10 04:14:55.813676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.605 [2024-12-10 04:14:55.813710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.605 qpair failed and we were unable to recover it. 00:27:56.605 [2024-12-10 04:14:55.813834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.605 [2024-12-10 04:14:55.813869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.813992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.814203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.814413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.814564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.814706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.814911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.814944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 
00:27:56.606 [2024-12-10 04:14:55.815052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.815203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.815355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.815497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.815650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.815864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.815897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.816041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.816208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.816367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.816513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 
00:27:56.606 [2024-12-10 04:14:55.816658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.816868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.816901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.817057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.817263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.817489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.817658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.817865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.817999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.818148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.818398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 
00:27:56.606 [2024-12-10 04:14:55.818673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.818814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.818948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.818981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.819190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.819226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.819418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.819450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.819566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.606 [2024-12-10 04:14:55.819598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.606 qpair failed and we were unable to recover it. 00:27:56.606 [2024-12-10 04:14:55.819705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.607 [2024-12-10 04:14:55.819740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.607 qpair failed and we were unable to recover it. 00:27:56.607 [2024-12-10 04:14:55.819854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.607 [2024-12-10 04:14:55.819886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.607 qpair failed and we were unable to recover it. 00:27:56.607 [2024-12-10 04:14:55.820092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.607 [2024-12-10 04:14:55.820126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.607 qpair failed and we were unable to recover it. 00:27:56.607 [2024-12-10 04:14:55.820315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.607 [2024-12-10 04:14:55.820350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.607 qpair failed and we were unable to recover it. 
00:27:56.607 [2024-12-10 04:14:55.820474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.607 [2024-12-10 04:14:55.820508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.607 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.820627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.820660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.820773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.820807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.820998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.821152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.821327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.821535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.821682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.821819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.821854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.822039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.822073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-12-10 04:14:55.822189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.822223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.822422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.822470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.822666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.822701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.822805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.822840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.822967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.823273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.823420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.823569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.823739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.823892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.823926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-12-10 04:14:55.824098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.824131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.824269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.824313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.824434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.824473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.824653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.824686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.824808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.824841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.825037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.825071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.825310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.825348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.825540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.825574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.825763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.825796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.825912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.825945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 
00:27:56.880 [2024-12-10 04:14:55.826062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.826095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.880 [2024-12-10 04:14:55.826273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.880 [2024-12-10 04:14:55.826308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.880 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.826490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.826523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.826657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.826691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.826867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.826900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.827142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.827185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.827291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.827324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.827451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.827484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.827644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.827678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.827863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.827896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-12-10 04:14:55.828025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.828059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.828185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.828219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.828430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.828463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.828652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.828685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.828823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.828856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.828985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.829192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.829382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.829596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.829743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-12-10 04:14:55.829901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.829938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.830901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.830934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.831056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.831089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.831214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.831249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.831439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.831472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-12-10 04:14:55.831588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.831623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.831743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.831776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.831967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.832117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.832396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.832548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.832922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.832957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.833063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.833096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.881 [2024-12-10 04:14:55.833277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.833311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 
00:27:56.881 [2024-12-10 04:14:55.833443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.881 [2024-12-10 04:14:55.833476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.881 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.833606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.833640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.833809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.833843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.833969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.834119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.834311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.834472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.834712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.834889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.834923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.835036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.835070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-12-10 04:14:55.835244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.835279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.835422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.835454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.835573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.835606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.835814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.835848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 00:27:56.882 [2024-12-10 04:14:55.836943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.882 [2024-12-10 04:14:55.836977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.882 qpair failed and we were unable to recover it. 
00:27:56.882 [2024-12-10 04:14:55.837153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.882 [2024-12-10 04:14:55.837196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.882 qpair failed and we were unable to recover it.
00:27:56.882 [... the same three-line failure repeats for every subsequent reconnect attempt against tqpair=0x7f23e0000b90 ...]
00:27:56.883 [2024-12-10 04:14:55.842235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-12-10 04:14:55.842275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
00:27:56.883 [2024-12-10 04:14:55.842463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.883 [2024-12-10 04:14:55.842503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.883 qpair failed and we were unable to recover it.
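On Linux, errno 111 is ECONNREFUSED: posix_sock_create() in SPDK's posix.c issued a TCP connect() to 10.0.0.2 port 4420 while nothing was listening there, which is the expected state in the middle of a target-disconnect test. A minimal standalone sketch (assumption: plain POSIX sockets, not SPDK's actual posix.c logic) that surfaces the same errno while the target is down:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log records above. */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}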
00:27:56.883 [... the identical connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats, cycling among tqpair=0x7f23d4000b90, 0x7f23e0000b90, and 0x120e1a0, every attempt against addr=10.0.0.2, port=4420 ...]
00:27:56.884 [... further connect() failed, errno = 111 / qpair failed records for tqpair=0x120e1a0 interleave with the test harness trace below ...]
00:27:56.884 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:56.884 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:56.884 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:56.884 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:56.884 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
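The xtrace lines interleaved above are the test script's side of the exchange: nvmf_target_disconnect_tc2 finishes its wait loop in autotest_common.sh (the (( i == 0 )) check), returns 0, and then timing_exit/xtrace_disable/set +x silence further tracing while the host keeps retrying its qpairs. Conceptually, the script is polling until the restarted target becomes reachable again. A hypothetical C analogue of that kind of wait loop (the function name, retry count, and sleep interval are assumptions, not harness code):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Retry a TCP connect until the listener is up or attempts run out.
 * Returns 0 once a connect succeeds, -1 otherwise. */
static int wait_for_listener(const char *ip, unsigned short port, int tries)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);          /* target is accepting again */
            return 0;
        }
        close(fd);
        if (errno != ECONNREFUSED)
            return -1;          /* some other failure: stop early */
        sleep(1);               /* still refused: try again */
    }
    return -1;
}

int main(void)
{
    return wait_for_listener("10.0.0.2", 4420, 30) == 0 ? 0 : 1;
}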
00:27:56.885 [... the same refused-connect triplet continues for tqpair=0x7f23e0000b90, 0x120e1a0, and 0x7f23d4000b90, all against addr=10.0.0.2, port=4420, through the end of this capture window ...]
00:27:56.888 [2024-12-10 04:14:55.874428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.874461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.874636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.874669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.874787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.874820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.874927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.874960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.875105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.875272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.875427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.875598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.875755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.875954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 
00:27:56.888 [2024-12-10 04:14:55.876188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.876405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.876566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.876733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.876918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.877040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.877073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.877251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.877288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.877471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.877506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.877616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.877664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.877801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.877833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 
00:27:56.888 [2024-12-10 04:14:55.878019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.878914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.878947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 
00:27:56.888 [2024-12-10 04:14:55.879544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.879864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.879990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.880024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.880163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.880314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.880346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.880523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.888 [2024-12-10 04:14:55.880557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.888 qpair failed and we were unable to recover it. 00:27:56.888 [2024-12-10 04:14:55.880667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.880699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.880887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.880919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.881045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 
00:27:56.889 [2024-12-10 04:14:55.881203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.881409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.881561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.881715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.881853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.881884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.882012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e1a0 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.882204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.882392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.882535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.882703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 
00:27:56.889 [2024-12-10 04:14:55.882851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.882888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.883851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.883885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.884010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.884047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.884180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.884225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.884530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 
00:27:56.889 [2024-12-10 04:14:55.884665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.884706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.884828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.884861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.884986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.885856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.885978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.886012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.886143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.886191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 
00:27:56.889 [2024-12-10 04:14:55.886302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.889 [2024-12-10 04:14:55.886334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.889 qpair failed and we were unable to recover it. 00:27:56.889 [2024-12-10 04:14:55.886453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.886485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.886619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.886659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.886840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.886878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.887852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.887885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 
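(Editor's note: errno = 111 is ECONNREFUSED on Linux — connect() reaches host 10.0.0.2 but nothing is listening on port 4420, which is expected while nvmf_target_disconnect deliberately stops and restarts the target. A minimal bash sketch to reproduce the same failure mode from the initiator host, using only bash's /dev/tcp device; the address and port are taken from the log, everything else is illustrative:)

    # Probe the NVMe/TCP listen port; while the target is down this fails
    # the same way the log's connect() calls do (ECONNREFUSED, errno 111).
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect() to 10.0.0.2:4420 refused or timed out"
    fi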
00:27:56.890 [... connect()-failed / qpair-failed triplets for tqpair=0x7f23d4000b90 continue from 04:14:55.887994 through 04:14:55.889197, interleaved with the test script's trace output below ...]
00:27:56.890 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:56.890 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:56.890 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
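(Editor's note: the trap registered by nvmf/common.sh above is the suite's cleanup idiom — dump the app's shared-memory diagnostics, then tear the target down, whether the test is interrupted, killed, or exits normally. A generic sketch of the same pattern; the function bodies are stand-ins, not the suite's real process_shm/nvmftestfini helpers:)

    collect_diagnostics() { echo "dumping shm state"; }    # stand-in for process_shm
    teardown_target()     { echo "stopping nvmf target"; } # stand-in for nvmftestfini
    cleanup() {
        collect_diagnostics || :   # '|| :' keeps a diagnostics failure from masking the exit path
        teardown_target
    }
    trap cleanup SIGINT SIGTERM EXIT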
00:27:56.890 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:56.890 [... triplets for tqpair=0x7f23d4000b90 continue through 04:14:55.890635 ...]
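(Editor's note: the rpc_cmd trace above shows target_disconnect.sh creating the 64 MiB, 512-byte-block RAM bdev "Malloc0" that the test exports over NVMe/TCP. A sketch of the equivalent call issued by hand with SPDK's rpc.py; the socket path here is the default assumption, not taken from this log:)

    # Create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0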
00:27:56.890 [2024-12-10 04:14:55.890761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.890799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.890921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.890961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.891936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.891970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.892096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.892127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.892252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.892287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 
00:27:56.890 [2024-12-10 04:14:55.892392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.890 [2024-12-10 04:14:55.892431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.890 qpair failed and we were unable to recover it. 00:27:56.890 [2024-12-10 04:14:55.892611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.892643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.892756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.892790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.892910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.892944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.893822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.893859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-12-10 04:14:55.894035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.894815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.894998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.895165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.895366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.895509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-12-10 04:14:55.895715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.895874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.895907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.896964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.896998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.897105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-12-10 04:14:55.897266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.897405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.897565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.897708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.897863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.897896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.898011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.898043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.898153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.898199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.898320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.898353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.898456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.898489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 00:27:56.891 [2024-12-10 04:14:55.898619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.891 [2024-12-10 04:14:55.898653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.891 qpair failed and we were unable to recover it. 
00:27:56.891 [2024-12-10 04:14:55.898762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.898794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.898905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.898938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.899900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.899930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.900905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.900935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d4000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.901936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.901969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.902814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.902845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.903901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.903932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.892 [2024-12-10 04:14:55.904872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.892 qpair failed and we were unable to recover it.
00:27:56.892 [2024-12-10 04:14:55.904982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.905893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.905992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.906211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.906430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.906575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.906723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.906859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.906892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.907867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.907901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.908863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.908896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.909826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.909859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.910038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.910072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.910190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.910225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.893 [2024-12-10 04:14:55.910396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.893 [2024-12-10 04:14:55.910430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.893 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.910553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.910586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.910758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.910791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.910897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.910930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.911843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.911874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.912873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.912906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.913934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.913965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.914143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.914305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.914453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.914671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.914806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.914970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.915862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.915892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.894 [2024-12-10 04:14:55.916841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.894 [2024-12-10 04:14:55.916872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.894 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.916982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.917852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.917976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.918899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.918999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.919896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.919928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.920866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.920981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.921945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.921975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.922909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.895 [2024-12-10 04:14:55.922938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.895 qpair failed and we were unable to recover it.
00:27:56.895 [2024-12-10 04:14:55.923049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.923210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.923359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.923567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.923695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.923906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.923935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.924180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 Malloc0
00:27:56.896 [2024-12-10 04:14:55.924325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.924521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.924672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.924794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.924914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.924941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.925038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.925065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.925164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.925201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:56.896 [2024-12-10 04:14:55.925370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.925398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.925559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.925588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.896 [2024-12-10 04:14:55.925692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.896 [2024-12-10 04:14:55.925720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.896 qpair failed and we were unable to recover it.
00:27:56.896 [2024-12-10 04:14:55.925829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.925857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.925968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.896 [2024-12-10 04:14:55.925996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.926886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.926990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 
00:27:56.896 [2024-12-10 04:14:55.927140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.927340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.927467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.927609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.927739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.927860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.927888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.928071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.928099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.928266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.928295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.928407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.928435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 00:27:56.896 [2024-12-10 04:14:55.928551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.928579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.896 qpair failed and we were unable to recover it. 
00:27:56.896 [2024-12-10 04:14:55.928675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.896 [2024-12-10 04:14:55.928702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.928806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.928834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.928928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.928956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.929849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.929876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-12-10 04:14:55.929976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.930962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.930996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.931105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.931138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.931321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.931354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.931456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.931490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23e0000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
00:27:56.897 [2024-12-10 04:14:55.931613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.931644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.931766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.897 [2024-12-10 04:14:55.931828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.931856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.932813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 00:27:56.897 [2024-12-10 04:14:55.932979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.897 [2024-12-10 04:14:55.933007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420 00:27:56.897 qpair failed and we were unable to recover it. 
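The *** TCP Transport Init *** notice confirms the target side has just executed the traced rpc_cmd nvmf_create_transport -t tcp -o. Assuming rpc_cmd wraps SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (both assumptions here; the sub-command and flags are copied from the xtrace line above), the equivalent standalone invocation would be:

    # Hedged sketch of the traced target_disconnect.sh@21 step; rpc.py
    # path and socket are assumed, not shown in this log.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o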
00:27:56.897 [2024-12-10 04:14:55.933120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.897 [2024-12-10 04:14:55.933147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.897 qpair failed and we were unable to recover it.
00:27:56.898 [... qpair-failed triplets repeat for tqpair=0x7f23d8000b90 with advancing timestamps ...]
00:27:56.898 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.898 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:56.898 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.898 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:56.898 [... qpair-failed triplets continue for tqpair=0x7f23d8000b90 ...]
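The next traced step creates the NVMe-oF subsystem. The flags come straight from the xtrace line: -a allows any host NQN to connect and -s sets the serial number. A sketch under the same rpc.py assumption as above:

    # Hedged sketch of target_disconnect.sh@22: create subsystem cnode1,
    # allow any host (-a), serial number SPDK00000000000001 (-s).
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001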
00:27:56.898 [2024-12-10 04:14:55.938533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.898 [2024-12-10 04:14:55.938563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.898 qpair failed and we were unable to recover it.
00:27:56.899 [... qpair-failed triplets repeat, mostly for tqpair=0x7f23d8000b90 with a short run against tqpair=0x7f23e0000b90 ...]
00:27:56.900 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.900 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:56.900 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.900 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:56.900 [... qpair-failed triplets continue for tqpair=0x7f23d8000b90 ...]
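The final traced step attaches the Malloc0 bdev (announced earlier in this log) as a namespace of cnode1. Until a TCP listener is also added for the subsystem, the host's connect() attempts keep failing with errno 111; the listener line below is an assumption about the step that would follow, not something visible in this excerpt:

    # Hedged sketch of target_disconnect.sh@24, plus the assumed listener
    # step that would let connects to 10.0.0.2:4420 start succeeding.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420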
00:27:56.900 [2024-12-10 04:14:55.947259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.900 [2024-12-10 04:14:55.947287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f23d8000b90 with addr=10.0.0.2, port=4420
00:27:56.900 qpair failed and we were unable to recover it.
[... the three-message failure above repeats 39 more times between 04:14:55.947 and 04:14:55.953, differing only in timestamps ...]
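errno = 111 on Linux is ECONNREFUSED: the host-side initiator keeps calling connect() against 10.0.0.2:4420 while no listener is bound there, so every TCP handshake is rejected and posix_sock_create logs the failure. A minimal standalone sketch of the same failure path (plain POSIX, illustrative only, not SPDK source):

    /* Illustrative only: reproduces the ECONNREFUSED (errno 111) result that
     * posix_sock_create logs when nothing is listening on 10.0.0.2:4420. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener bound, this prints: errno 111 (Connection refused) */
            printf("errno %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }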
00:27:56.901 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.901 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:56.901 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.901 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 more connect() errno 111 failure triplets interleaved with the trace lines above, 04:14:55.953 through 04:14:55.954 ...]
[... 10 more connect() errno 111 failure triplets, 04:14:55.954 through 04:14:55.956 ...]
[... 3 more connect() errno 111 failure triplets, 04:14:55.956 ...]
00:27:56.901 [2024-12-10 04:14:55.956748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:56.902 [2024-12-10 04:14:55.962415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.902 [2024-12-10 04:14:55.962522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.902 [2024-12-10 04:14:55.962560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.902 [2024-12-10 04:14:55.962577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.902 [2024-12-10 04:14:55.962593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:27:56.902 [2024-12-10 04:14:55.962635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.902 qpair failed and we were unable to recover it.
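The CONNECT rejection above decodes as follows: once the listener comes up, TCP connects succeed, but the target no longer recognizes controller ID 0x1 when the host tries to re-attach I/O qpair 2, so it completes the Fabrics CONNECT with SCT 0x1 (command specific) and SC 0x82 (decimal 130), which the NVMe-oF Fabrics specification defines as CONNECT Invalid Parameters. A small decoder sketch (status values taken from the Fabrics spec's Connect command-specific codes; verify against your spec revision):

    /* Hedged sketch: decode the "sct 1, sc 130" status seen in the log. */
    #include <stdio.h>

    static const char *connect_sc_str(unsigned int sct, unsigned int sc)
    {
        if (sct != 0x1) /* 0x1 = command-specific status type */
            return "not a command-specific status";
        switch (sc) {
        case 0x80: return "CONNECT: incompatible format";
        case 0x81: return "CONNECT: controller busy";
        case 0x82: return "CONNECT: invalid parameters"; /* sc 130 */
        case 0x83: return "CONNECT: restart discovery";
        case 0x84: return "CONNECT: invalid host";
        default:   return "unknown command-specific status";
        }
    }

    int main(void)
    {
        /* Prints: CONNECT: invalid parameters */
        printf("%s\n", connect_sc_str(0x1, 130));
        return 0;
    }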
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.902 04:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 216054
00:27:56.902 [2024-12-10 04:14:55.972388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.902 [2024-12-10 04:14:55.972466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.902 [2024-12-10 04:14:55.972490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.902 [2024-12-10 04:14:55.972503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.902 [2024-12-10 04:14:55.972514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:27:56.902 [2024-12-10 04:14:55.972541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.902 qpair failed and we were unable to recover it.
[... this seven-message CONNECT failure block repeats 44 more times between 04:14:55.982 and 04:14:56.413, differing only in timestamps ...]
00:27:57.166 [2024-12-10 04:14:56.423513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.166 [2024-12-10 04:14:56.423569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.166 [2024-12-10 04:14:56.423582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.166 [2024-12-10 04:14:56.423590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.166 [2024-12-10 04:14:56.423596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.166 [2024-12-10 04:14:56.423611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.166 qpair failed and we were unable to recover it. 00:27:57.166 [2024-12-10 04:14:56.433537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.166 [2024-12-10 04:14:56.433592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.166 [2024-12-10 04:14:56.433606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.166 [2024-12-10 04:14:56.433612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.166 [2024-12-10 04:14:56.433619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.166 [2024-12-10 04:14:56.433633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.166 qpair failed and we were unable to recover it. 00:27:57.166 [2024-12-10 04:14:56.443575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.166 [2024-12-10 04:14:56.443632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.166 [2024-12-10 04:14:56.443646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.166 [2024-12-10 04:14:56.443654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.166 [2024-12-10 04:14:56.443661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.166 [2024-12-10 04:14:56.443676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.166 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-12-10 04:14:56.453628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.425 [2024-12-10 04:14:56.453681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.425 [2024-12-10 04:14:56.453695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.425 [2024-12-10 04:14:56.453702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.453711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.453727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.463619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.463671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.463685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.463691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.463698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.463713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.473678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.473750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.473763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.473770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.473776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.473791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-12-10 04:14:56.483700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.483763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.483786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.483794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.483800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.483819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.493713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.493809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.493824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.493831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.493837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.493853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.503751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.503827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.503841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.503849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.503855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.503870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-12-10 04:14:56.513801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.513869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.513882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.513890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.513896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.513911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.523801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.523860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.523874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.523881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.523887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.523902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.533877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.533936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.533950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.533957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.533963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.533980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-12-10 04:14:56.543786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.543843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.543860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.543868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.543874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.543888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.553916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.554001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.554014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.554022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.554028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.554044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.563935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.563988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.564002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.564009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.564016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.564031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-12-10 04:14:56.573950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.574006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.574019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.574027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.574033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.574049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-12-10 04:14:56.583966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.426 [2024-12-10 04:14:56.584019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.426 [2024-12-10 04:14:56.584032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.426 [2024-12-10 04:14:56.584043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.426 [2024-12-10 04:14:56.584049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.426 [2024-12-10 04:14:56.584064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.594015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.594071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.594085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.594092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.594098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.594114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 
00:27:57.427 [2024-12-10 04:14:56.604039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.604141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.604156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.604163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.604173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.604189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.614061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.614139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.614153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.614160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.614175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.614192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.624143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.624204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.624218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.624225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.624231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.624247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 
00:27:57.427 [2024-12-10 04:14:56.634132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.634193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.634206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.634214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.634220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.634235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.644150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.644229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.644243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.644250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.644257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.644273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.654102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.654163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.654180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.654188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.654194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.654210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 
00:27:57.427 [2024-12-10 04:14:56.664254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.664319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.664334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.664341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.664348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.664363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.674248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.674308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.674323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.674330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.674336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.674350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.684244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.684336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.684349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.684356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.684362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.684377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 
00:27:57.427 [2024-12-10 04:14:56.694221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.694288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.694302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.694310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.694316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.694331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.427 [2024-12-10 04:14:56.704331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.427 [2024-12-10 04:14:56.704386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.427 [2024-12-10 04:14:56.704399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.427 [2024-12-10 04:14:56.704406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.427 [2024-12-10 04:14:56.704412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.427 [2024-12-10 04:14:56.704428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.427 qpair failed and we were unable to recover it. 00:27:57.687 [2024-12-10 04:14:56.714361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.687 [2024-12-10 04:14:56.714418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.714431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.714441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.714448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.714463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 
00:27:57.688 [2024-12-10 04:14:56.724393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.724451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.724464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.724472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.724478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.724494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.734407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.734461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.734475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.734481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.734488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.734503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.744433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.744486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.744500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.744507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.744513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.744527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 
00:27:57.688 [2024-12-10 04:14:56.754470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.754526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.754540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.754546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.754553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.754571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.764510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.764597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.764610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.764618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.764623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.764638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.774588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.774649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.774663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.774670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.774676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.774691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 
00:27:57.688 [2024-12-10 04:14:56.784604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.784657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.784670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.784677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.784684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.784699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.794626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.794696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.794709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.794716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.794722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.794737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.804635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.804720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.804733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.804741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.804747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.804761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 
00:27:57.688 [2024-12-10 04:14:56.814641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.814693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.814707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.814714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.814721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.814736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.824662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.824768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.824782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.824789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.824796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.824811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 00:27:57.688 [2024-12-10 04:14:56.834742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.688 [2024-12-10 04:14:56.834841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.688 [2024-12-10 04:14:56.834854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.688 [2024-12-10 04:14:56.834861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.688 [2024-12-10 04:14:56.834867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.688 [2024-12-10 04:14:56.834881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.688 qpair failed and we were unable to recover it. 
00:27:57.688 [2024-12-10 04:14:56.844778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.844849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.844867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.844875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.844881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.844896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 00:27:57.689 [2024-12-10 04:14:56.854751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.854833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.854847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.854854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.854860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.854875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 00:27:57.689 [2024-12-10 04:14:56.864781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.864837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.864850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.864857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.864863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.864878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 
00:27:57.689 [2024-12-10 04:14:56.874825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.874904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.874918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.874925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.874932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.874946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 00:27:57.689 [2024-12-10 04:14:56.884857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.884924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.884938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.884945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.884954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.884969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 00:27:57.689 [2024-12-10 04:14:56.894817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.689 [2024-12-10 04:14:56.894909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.689 [2024-12-10 04:14:56.894922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.689 [2024-12-10 04:14:56.894929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.689 [2024-12-10 04:14:56.894935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:57.689 [2024-12-10 04:14:56.894950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.689 qpair failed and we were unable to recover it. 
00:27:58.474 [2024-12-10 04:14:57.536706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.536761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.536781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.536788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.536795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.536811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-12-10 04:14:57.546747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.546800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.546813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.546820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.546826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.546841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-12-10 04:14:57.556770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.556831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.556845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.556853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.556860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.556875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 
00:27:58.474 [2024-12-10 04:14:57.566765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.566824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.566838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.566845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.566851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.566866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-12-10 04:14:57.576808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.576867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.576880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.576887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.576897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.576912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-12-10 04:14:57.586838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.586892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.586905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.586912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.586919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.586933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 
00:27:58.474 [2024-12-10 04:14:57.596905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.474 [2024-12-10 04:14:57.596962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.474 [2024-12-10 04:14:57.596975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.474 [2024-12-10 04:14:57.596982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.474 [2024-12-10 04:14:57.596989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.474 [2024-12-10 04:14:57.597005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.474 qpair failed and we were unable to recover it. 00:27:58.474 [2024-12-10 04:14:57.606897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.606948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.606962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.606969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.606976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.606992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.616900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.616962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.616976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.616983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.616989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.617003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-12-10 04:14:57.626960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.627015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.627028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.627036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.627042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.627058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.636995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.637050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.637063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.637070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.637076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.637092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.647014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.647070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.647084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.647090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.647096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.647111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-12-10 04:14:57.657040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.657096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.657110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.657118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.657123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.657138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.667107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.667205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.667221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.667229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.667234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.667249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.677040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.677127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.677140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.677147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.677153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.677172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-12-10 04:14:57.687141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.687201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.687215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.687222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.687229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.687243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.697170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.697244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.697258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.697266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.697271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.697287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.707217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.707274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.707288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.707298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.707305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.707319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 
00:27:58.475 [2024-12-10 04:14:57.717233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.717290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.717304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.717311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.717317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.717333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.727233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.475 [2024-12-10 04:14:57.727315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.475 [2024-12-10 04:14:57.727329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.475 [2024-12-10 04:14:57.727336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.475 [2024-12-10 04:14:57.727343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.475 [2024-12-10 04:14:57.727357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.475 qpair failed and we were unable to recover it. 00:27:58.475 [2024-12-10 04:14:57.737289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.476 [2024-12-10 04:14:57.737348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.476 [2024-12-10 04:14:57.737362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.476 [2024-12-10 04:14:57.737370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.476 [2024-12-10 04:14:57.737376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.476 [2024-12-10 04:14:57.737392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.476 qpair failed and we were unable to recover it. 
00:27:58.476 [2024-12-10 04:14:57.747324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.476 [2024-12-10 04:14:57.747380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.476 [2024-12-10 04:14:57.747394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.476 [2024-12-10 04:14:57.747401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.476 [2024-12-10 04:14:57.747407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.476 [2024-12-10 04:14:57.747425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.476 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.757378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.757438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.757452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.757459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.757465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.757480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.767385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.767441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.767454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.767461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.767468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.767482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 
00:27:58.736 [2024-12-10 04:14:57.777423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.777482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.777496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.777503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.777509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.777523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.787408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.787464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.787477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.787484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.787490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.787506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.797485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.797545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.797559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.797566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.797572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.797586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 
00:27:58.736 [2024-12-10 04:14:57.807472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.807524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.807536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.807543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.807550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.807565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.817510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.817563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.817577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.817583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.817589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.817604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.827546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.827602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.827616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.827623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.827629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.827644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 
00:27:58.736 [2024-12-10 04:14:57.837585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.736 [2024-12-10 04:14:57.837653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.736 [2024-12-10 04:14:57.837667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.736 [2024-12-10 04:14:57.837677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.736 [2024-12-10 04:14:57.837683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.736 [2024-12-10 04:14:57.837699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.736 qpair failed and we were unable to recover it. 00:27:58.736 [2024-12-10 04:14:57.847612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.847667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.847681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.847688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.847694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.847710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.857632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.857688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.857702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.857710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.857717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.857732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 
00:27:58.737 [2024-12-10 04:14:57.867682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.867749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.867763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.867771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.867777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.867792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.877689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.877748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.877761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.877768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.877774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.877791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.887729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.887809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.887823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.887830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.887836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.887851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 
00:27:58.737 [2024-12-10 04:14:57.897760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.897827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.897840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.897848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.897854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.897868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.907711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.907768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.907781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.907788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.907794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.907809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.917821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.917887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.917900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.917908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.917914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.917929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 
00:27:58.737 [2024-12-10 04:14:57.927848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.927904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.927917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.927924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.927931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.927945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.937926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.937981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.937995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.938003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.938010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.938025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.947908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.947962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.947975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.947982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.947988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.948003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 
00:27:58.737 [2024-12-10 04:14:57.957932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.957986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.957999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.958007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.958013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.958028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.967962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.968017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.968034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.968041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.737 [2024-12-10 04:14:57.968047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.737 [2024-12-10 04:14:57.968062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.737 qpair failed and we were unable to recover it. 00:27:58.737 [2024-12-10 04:14:57.977989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.737 [2024-12-10 04:14:57.978047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.737 [2024-12-10 04:14:57.978060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.737 [2024-12-10 04:14:57.978067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.738 [2024-12-10 04:14:57.978073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.738 [2024-12-10 04:14:57.978088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.738 qpair failed and we were unable to recover it. 
00:27:58.738 [2024-12-10 04:14:57.988009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.738 [2024-12-10 04:14:57.988068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.738 [2024-12-10 04:14:57.988081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.738 [2024-12-10 04:14:57.988089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.738 [2024-12-10 04:14:57.988095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.738 [2024-12-10 04:14:57.988110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.738 qpair failed and we were unable to recover it. 00:27:58.738 [2024-12-10 04:14:57.998067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.738 [2024-12-10 04:14:57.998137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.738 [2024-12-10 04:14:57.998150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.738 [2024-12-10 04:14:57.998157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.738 [2024-12-10 04:14:57.998164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.738 [2024-12-10 04:14:57.998184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.738 qpair failed and we were unable to recover it. 00:27:58.738 [2024-12-10 04:14:58.008069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.738 [2024-12-10 04:14:58.008125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.738 [2024-12-10 04:14:58.008138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.738 [2024-12-10 04:14:58.008145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.738 [2024-12-10 04:14:58.008154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.738 [2024-12-10 04:14:58.008173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.738 qpair failed and we were unable to recover it. 
00:27:58.998 [2024-12-10 04:14:58.018098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.018156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.018173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.018180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.018186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.018201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 00:27:58.998 [2024-12-10 04:14:58.028130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.028186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.028199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.028207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.028213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.028228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 00:27:58.998 [2024-12-10 04:14:58.038183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.038290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.038304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.038311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.038317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.038332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 
00:27:58.998 [2024-12-10 04:14:58.048203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.048263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.048276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.048283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.048289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.048304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 00:27:58.998 [2024-12-10 04:14:58.058220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.058276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.058290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.058297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.058303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.058318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 00:27:58.998 [2024-12-10 04:14:58.068247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.998 [2024-12-10 04:14:58.068302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.998 [2024-12-10 04:14:58.068315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.998 [2024-12-10 04:14:58.068323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.998 [2024-12-10 04:14:58.068329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.998 [2024-12-10 04:14:58.068345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.998 qpair failed and we were unable to recover it. 
00:27:58.998 [2024-12-10 04:14:58.078303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.078358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.078372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.078379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.078385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.078400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.088316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.088374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.088388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.088396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.088402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.088416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.098277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.098335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.098352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.098360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.098366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.098380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 
00:27:58.999 [2024-12-10 04:14:58.108281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.108335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.108349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.108357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.108363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.108378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.118415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.118477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.118490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.118497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.118503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.118518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.128425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.128481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.128494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.128501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.128507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.128521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 
00:27:58.999 [2024-12-10 04:14:58.138467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.138530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.138543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.138550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.138559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.138574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.148520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.148576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.148590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.148597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.148603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.148618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.158441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.158507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.158520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.158528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.158534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.158549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 
00:27:58.999 [2024-12-10 04:14:58.168548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.168607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.168621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.168629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.168635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.168650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.178539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.178640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.178653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.178661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.178666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.178681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.188576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.188633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.188647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.188653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.188660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.188674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 
00:27:58.999 [2024-12-10 04:14:58.198619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.198673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.198687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.198693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.999 [2024-12-10 04:14:58.198700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:58.999 [2024-12-10 04:14:58.198716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.999 qpair failed and we were unable to recover it. 00:27:58.999 [2024-12-10 04:14:58.208715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.999 [2024-12-10 04:14:58.208773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.999 [2024-12-10 04:14:58.208794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.999 [2024-12-10 04:14:58.208802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.208809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.208828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 00:27:59.000 [2024-12-10 04:14:58.218679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.218780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.218796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.218803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.218809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.218824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 
00:27:59.000 [2024-12-10 04:14:58.228711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.228780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.228799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.228806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.228812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.228828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 00:27:59.000 [2024-12-10 04:14:58.238734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.238789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.238803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.238810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.238816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.238831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 00:27:59.000 [2024-12-10 04:14:58.248765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.248833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.248847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.248854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.248861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.248875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 
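Every failure in this run decodes the same way: the target-side _nvmf_ctrlr_add_io_qpair error means the host's fabrics CONNECT for the I/O queue named controller ID 0x1, which the target no longer recognizes, and the host sees the rejection as sct 1, sc 130. A minimal sketch of decoding that status, assuming the NVMe-oF Fabrics CONNECT command-specific status codes from the spec (sc 130 == 0x82, "Connect Invalid Parameters"); the helper name here is hypothetical, not an SPDK API:

#include <stdio.h>

/* Hypothetical helper (not SPDK): decode the CONNECT status repeated in
 * this log. SCT 1 is Command Specific; for the Fabrics CONNECT command
 * the spec assigns 0x80-0x84, and 0x82 is "Connect Invalid Parameters",
 * which matches the target's "Unknown controller ID 0x1" rejection. */
static const char *
fabrics_connect_status(int sct, int sc)
{
	if (sct == 1 && sc == 0x82) {
		return "CONNECT Invalid Parameters";
	}
	return "other status";
}

int
main(void)
{
	/* sc is printed in decimal by the driver: 130 == 0x82. */
	printf("sct 1, sc 130 -> %s\n", fabrics_connect_status(1, 130));
	return 0;
}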
00:27:59.000 [2024-12-10 04:14:58.258787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.258842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.258856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.258863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.258869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.258884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 00:27:59.000 [2024-12-10 04:14:58.268813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.268865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.268878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.268889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.268895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.268910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 00:27:59.000 [2024-12-10 04:14:58.278855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.000 [2024-12-10 04:14:58.278933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.000 [2024-12-10 04:14:58.278946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.000 [2024-12-10 04:14:58.278953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.000 [2024-12-10 04:14:58.278959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.000 [2024-12-10 04:14:58.278974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.000 qpair failed and we were unable to recover it. 
00:27:59.266 [2024-12-10 04:14:58.288872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.288930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.288943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.288950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.288956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.288971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.298907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.298971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.298984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.298992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.298998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.299013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.308985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.309088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.309101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.309108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.309114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.309132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 
00:27:59.267 [2024-12-10 04:14:58.318968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.319028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.319042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.319049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.319055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.319070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.328984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.329040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.329054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.329062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.329068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.329083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.339013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.339068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.339081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.339089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.339095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.339111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 
00:27:59.267 [2024-12-10 04:14:58.349103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.349205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.349220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.349227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.349233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.349248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.359090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.359156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.359173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.359180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.359186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.359201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.369030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.369085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.369099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.369107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.369114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.369129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 
00:27:59.267 [2024-12-10 04:14:58.379122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.379179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.379193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.379200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.379206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.379222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.389160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.389217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.389230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.389237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.389243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.389258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 00:27:59.267 [2024-12-10 04:14:58.399195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.267 [2024-12-10 04:14:58.399250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.267 [2024-12-10 04:14:58.399264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.267 [2024-12-10 04:14:58.399274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.267 [2024-12-10 04:14:58.399280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.267 [2024-12-10 04:14:58.399295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.267 qpair failed and we were unable to recover it. 
00:27:59.267 [2024-12-10 04:14:58.409215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.409274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.409287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.409295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.409301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.409316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.419242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.419293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.419307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.419314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.419320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.419335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.429268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.429329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.429342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.429350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.429355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.429370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 
00:27:59.268 [2024-12-10 04:14:58.439304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.439364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.439378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.439385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.439391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.439409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.449380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.449441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.449454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.449462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.449468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.449483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.459277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.459348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.459362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.459369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.459375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.459390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 
00:27:59.268 [2024-12-10 04:14:58.469395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.469448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.469461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.469468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.469474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.469489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.479414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.479490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.479503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.479510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.479516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.479531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.489380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.489468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.489482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.489489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.489495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.489511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 
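On the host side the same sequence repeats: nvme_tcp fails the qpair, and spdk_nvme_qpair_process_completions() surfaces it as CQ transport error -6 (-ENXIO, "No such device or address"). A minimal polling sketch that reacts to that return code, assuming only the documented convention that spdk_nvme_qpair_process_completions() returns the number of completions processed or a negative errno; the recovery policy (freeing the dead qpair) is an assumption for illustration, not what this test harness does:

#include <stdio.h>
#include "spdk/nvme.h"

static int
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Second argument 0 means no artificial cap on completions per poll. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* rc == -6 (-ENXIO) is what this log reports once the fabric
		 * CONNECT has failed and the qpair cannot make progress. */
		fprintf(stderr, "qpair failed, rc=%d; freeing it\n", rc);
		spdk_nvme_ctrlr_free_io_qpair(qpair);
		return rc;
	}
	return 0;
}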
00:27:59.268 [2024-12-10 04:14:58.499483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.499568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.499582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.499589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.499595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.499610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.509415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.509473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.509486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.509494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.509500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.509514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.519525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.519581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.519596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.519603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.519610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.519624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 
00:27:59.268 [2024-12-10 04:14:58.529473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.529539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.529556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.268 [2024-12-10 04:14:58.529564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.268 [2024-12-10 04:14:58.529569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.268 [2024-12-10 04:14:58.529584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.268 qpair failed and we were unable to recover it. 00:27:59.268 [2024-12-10 04:14:58.539510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.268 [2024-12-10 04:14:58.539584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.268 [2024-12-10 04:14:58.539598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.269 [2024-12-10 04:14:58.539605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.269 [2024-12-10 04:14:58.539611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.269 [2024-12-10 04:14:58.539625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.269 qpair failed and we were unable to recover it. 00:27:59.529 [2024-12-10 04:14:58.549537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.529 [2024-12-10 04:14:58.549590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.529 [2024-12-10 04:14:58.549603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.529 [2024-12-10 04:14:58.549610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.529 [2024-12-10 04:14:58.549616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.529 [2024-12-10 04:14:58.549631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.529 qpair failed and we were unable to recover it. 
00:27:59.529 [2024-12-10 04:14:58.559588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.529 [2024-12-10 04:14:58.559672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.529 [2024-12-10 04:14:58.559686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.529 [2024-12-10 04:14:58.559693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.529 [2024-12-10 04:14:58.559699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.529 [2024-12-10 04:14:58.559713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.529 qpair failed and we were unable to recover it. 00:27:59.529 [2024-12-10 04:14:58.569709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.529 [2024-12-10 04:14:58.569795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.529 [2024-12-10 04:14:58.569809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.529 [2024-12-10 04:14:58.569816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.529 [2024-12-10 04:14:58.569825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.529 [2024-12-10 04:14:58.569839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.529 qpair failed and we were unable to recover it. 00:27:59.529 [2024-12-10 04:14:58.579649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.529 [2024-12-10 04:14:58.579736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.529 [2024-12-10 04:14:58.579750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.529 [2024-12-10 04:14:58.579757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.529 [2024-12-10 04:14:58.579763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.529 [2024-12-10 04:14:58.579778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.529 qpair failed and we were unable to recover it. 
00:27:59.529 [2024-12-10 04:14:58.589735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.529 [2024-12-10 04:14:58.589789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.529 [2024-12-10 04:14:58.589803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.529 [2024-12-10 04:14:58.589810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.529 [2024-12-10 04:14:58.589816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.529 [2024-12-10 04:14:58.589831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.529 qpair failed and we were unable to recover it. 00:27:59.529 [2024-12-10 04:14:58.599728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.530 [2024-12-10 04:14:58.599815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.530 [2024-12-10 04:14:58.599830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.530 [2024-12-10 04:14:58.599836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.530 [2024-12-10 04:14:58.599843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.530 [2024-12-10 04:14:58.599857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.530 qpair failed and we were unable to recover it. 00:27:59.530 [2024-12-10 04:14:58.609703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.530 [2024-12-10 04:14:58.609762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.530 [2024-12-10 04:14:58.609775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.530 [2024-12-10 04:14:58.609782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.530 [2024-12-10 04:14:58.609788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:27:59.530 [2024-12-10 04:14:58.609803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.530 qpair failed and we were unable to recover it. 
00:28:00.056 [2024-12-10 04:14:59.281695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.281752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.281766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.281773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.281779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.281794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 00:28:00.056 [2024-12-10 04:14:59.291717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.291769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.291783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.291790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.291796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.291811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 00:28:00.056 [2024-12-10 04:14:59.301732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.301785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.301799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.301806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.301812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.301827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 
00:28:00.056 [2024-12-10 04:14:59.311767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.311821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.311835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.311842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.311849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.311863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 00:28:00.056 [2024-12-10 04:14:59.321789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.321844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.321857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.321865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.321871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.321886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 00:28:00.056 [2024-12-10 04:14:59.331811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.056 [2024-12-10 04:14:59.331918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.056 [2024-12-10 04:14:59.331931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.056 [2024-12-10 04:14:59.331938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.056 [2024-12-10 04:14:59.331944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.056 [2024-12-10 04:14:59.331958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.056 qpair failed and we were unable to recover it. 
00:28:00.324 [2024-12-10 04:14:59.341898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.341951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.341964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.341971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.341977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.341992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.324 [2024-12-10 04:14:59.351815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.351917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.351931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.351938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.351944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.351959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.324 [2024-12-10 04:14:59.361922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.361980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.361994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.362002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.362009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.362025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 
00:28:00.324 [2024-12-10 04:14:59.371992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.372057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.372071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.372079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.372085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.372100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.324 [2024-12-10 04:14:59.381963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.382017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.382031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.382038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.382044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.382059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.324 [2024-12-10 04:14:59.391999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.392072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.392086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.392096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.392103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.392118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 
00:28:00.324 [2024-12-10 04:14:59.402043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.402100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.402114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.402121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.402127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.402142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.324 [2024-12-10 04:14:59.412065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.324 [2024-12-10 04:14:59.412122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.324 [2024-12-10 04:14:59.412136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.324 [2024-12-10 04:14:59.412143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.324 [2024-12-10 04:14:59.412150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.324 [2024-12-10 04:14:59.412165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.324 qpair failed and we were unable to recover it. 00:28:00.325 [2024-12-10 04:14:59.422103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.422158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.422176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.422184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.422191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.422206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 
00:28:00.325 [2024-12-10 04:14:59.432134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.432192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.432205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.432214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.432220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.432239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 00:28:00.325 [2024-12-10 04:14:59.442148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.442210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.442223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.442231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.442237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.442252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 00:28:00.325 [2024-12-10 04:14:59.452177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.452234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.452248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.452255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.452261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.452277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 
00:28:00.325 [2024-12-10 04:14:59.462201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.462249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.462262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.462269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.462275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.462290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 00:28:00.325 [2024-12-10 04:14:59.472259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.472316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.472330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.472338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.472344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.472358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 00:28:00.325 [2024-12-10 04:14:59.482323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.482428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.482442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.482449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.482455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.482470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.325 qpair failed and we were unable to recover it. 
00:28:00.325 [2024-12-10 04:14:59.492304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.325 [2024-12-10 04:14:59.492364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.325 [2024-12-10 04:14:59.492378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.325 [2024-12-10 04:14:59.492385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.325 [2024-12-10 04:14:59.492391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.325 [2024-12-10 04:14:59.492406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.502338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.502435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.502448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.502455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.502461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.502476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.512353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.512409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.512422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.512430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.512436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.512451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 
00:28:00.326 [2024-12-10 04:14:59.522370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.522428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.522442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.522452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.522459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.522474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.532427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.532484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.532497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.532504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.532510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.532525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.542457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.542511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.542525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.542532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.542538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.542553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 
00:28:00.326 [2024-12-10 04:14:59.552489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.552548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.552561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.552568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.552574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.552588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.562485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.562543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.562556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.562563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.562570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.562587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.572541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.572594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.572607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.572613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.572619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.572634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 
00:28:00.326 [2024-12-10 04:14:59.582556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.582614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.326 [2024-12-10 04:14:59.582626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.326 [2024-12-10 04:14:59.582633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.326 [2024-12-10 04:14:59.582639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.326 [2024-12-10 04:14:59.582654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.326 qpair failed and we were unable to recover it. 00:28:00.326 [2024-12-10 04:14:59.592597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.326 [2024-12-10 04:14:59.592651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.327 [2024-12-10 04:14:59.592665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.327 [2024-12-10 04:14:59.592673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.327 [2024-12-10 04:14:59.592678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.327 [2024-12-10 04:14:59.592693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.327 qpair failed and we were unable to recover it. 00:28:00.587 [2024-12-10 04:14:59.602686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.602741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.602755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.602762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.602768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.602783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 
00:28:00.587 [2024-12-10 04:14:59.612647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.612700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.612714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.612722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.612729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.612745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 00:28:00.587 [2024-12-10 04:14:59.622612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.622667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.622681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.622688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.622694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.622709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 00:28:00.587 [2024-12-10 04:14:59.632635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.632693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.632708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.632716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.632722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.632736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 
00:28:00.587 [2024-12-10 04:14:59.642742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.642797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.642811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.642818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.642824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.642838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 00:28:00.587 [2024-12-10 04:14:59.652759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.587 [2024-12-10 04:14:59.652816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.587 [2024-12-10 04:14:59.652833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.587 [2024-12-10 04:14:59.652840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.587 [2024-12-10 04:14:59.652846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.587 [2024-12-10 04:14:59.652861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.587 qpair failed and we were unable to recover it. 00:28:00.587 [2024-12-10 04:14:59.662782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.662870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.662884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.662892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.662898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.662913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-10 04:14:59.672810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.672866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.672880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.672887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.672893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.672907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.682868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.682924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.682938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.682946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.682952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.682967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.692881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.692939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.692953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.692960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.692972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.692987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-10 04:14:59.702886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.702939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.702953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.702960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.702966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.702982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.712926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.712980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.712993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.713000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.713007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.713022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.722953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.723011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.723025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.723032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.723038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.723053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-10 04:14:59.732983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.733037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.733050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.733057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.733064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.733080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.743050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.743147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.743160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.743170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.743177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.743192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.753025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.753079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.753093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.753099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.753106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.753121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-10 04:14:59.763071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.763127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.763141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.763148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.763154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.763173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.773103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.773155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.773171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.773179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.773185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.773199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-10 04:14:59.783124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.783179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.588 [2024-12-10 04:14:59.783196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.588 [2024-12-10 04:14:59.783203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.588 [2024-12-10 04:14:59.783210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.588 [2024-12-10 04:14:59.783225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-10 04:14:59.793066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.588 [2024-12-10 04:14:59.793129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.793142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.793150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.793156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.793176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-10 04:14:59.803181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.803248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.803261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.803269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.803275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.803291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-10 04:14:59.813241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.813346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.813359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.813367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.813372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.813388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-10 04:14:59.823233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.823292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.823305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.823312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.823321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.823336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-10 04:14:59.833254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.833306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.833319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.833327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.833333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.833349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-10 04:14:59.843285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.843342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.843355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.843362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.843369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.843384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-10 04:14:59.853304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.853362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.853376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.853383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.853389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.853404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-10 04:14:59.863319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.589 [2024-12-10 04:14:59.863375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.589 [2024-12-10 04:14:59.863389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.589 [2024-12-10 04:14:59.863396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.589 [2024-12-10 04:14:59.863402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.589 [2024-12-10 04:14:59.863417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.873361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.873417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.873431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.873438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.873444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.873460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 
00:28:00.850 [2024-12-10 04:14:59.883361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.883416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.883431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.883440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.883447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.883463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.893425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.893479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.893492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.893499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.893505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.893520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.903442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.903495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.903508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.903515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.903522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.903537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 
00:28:00.850 [2024-12-10 04:14:59.913444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.913511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.913524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.913531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.913537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.913552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.923513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.923570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.923584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.923591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.923598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.923612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.933530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.933589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.933603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.933610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.933617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.933632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 
00:28:00.850 [2024-12-10 04:14:59.943524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.943591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.943605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.943613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.943619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.943634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.953527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.953585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.953599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.850 [2024-12-10 04:14:59.953609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.850 [2024-12-10 04:14:59.953615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.850 [2024-12-10 04:14:59.953629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.850 qpair failed and we were unable to recover it. 00:28:00.850 [2024-12-10 04:14:59.963624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.850 [2024-12-10 04:14:59.963680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.850 [2024-12-10 04:14:59.963694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:14:59.963701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:14:59.963707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:14:59.963722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 
00:28:00.851 [2024-12-10 04:14:59.973671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:14:59.973725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:14:59.973738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:14:59.973745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:14:59.973752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:14:59.973767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:14:59.983590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:14:59.983647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:14:59.983660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:14:59.983667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:14:59.983674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:14:59.983689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:14:59.993693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:14:59.993750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:14:59.993763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:14:59.993770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:14:59.993777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:14:59.993794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 
00:28:00.851 [2024-12-10 04:15:00.003829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.003907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.003922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.003929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.003936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.003951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.013850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.013910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.013926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.013933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.013940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.013957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.023726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.023786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.023801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.023809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.023815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.023832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 
00:28:00.851 [2024-12-10 04:15:00.033842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.033909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.033925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.033932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.033939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.033955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.043823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.043890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.043906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.043914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.043921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.043936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.053896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.053970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.053987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.053994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.054000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.054016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 
00:28:00.851 [2024-12-10 04:15:00.063858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.063915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.063931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.063938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.063944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.063960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.073912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.073975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.073991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.073999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.074005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.074021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 00:28:00.851 [2024-12-10 04:15:00.083988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.084047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.084062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.084072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.084078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.851 [2024-12-10 04:15:00.084094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.851 qpair failed and we were unable to recover it. 
00:28:00.851 [2024-12-10 04:15:00.093925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.851 [2024-12-10 04:15:00.093986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.851 [2024-12-10 04:15:00.094001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.851 [2024-12-10 04:15:00.094009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.851 [2024-12-10 04:15:00.094015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.852 [2024-12-10 04:15:00.094030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.852 qpair failed and we were unable to recover it. 00:28:00.852 [2024-12-10 04:15:00.103932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.852 [2024-12-10 04:15:00.103985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.852 [2024-12-10 04:15:00.104000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.852 [2024-12-10 04:15:00.104008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.852 [2024-12-10 04:15:00.104014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.852 [2024-12-10 04:15:00.104030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.852 qpair failed and we were unable to recover it. 00:28:00.852 [2024-12-10 04:15:00.113974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.852 [2024-12-10 04:15:00.114033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.852 [2024-12-10 04:15:00.114048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.852 [2024-12-10 04:15:00.114055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.852 [2024-12-10 04:15:00.114063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.852 [2024-12-10 04:15:00.114080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.852 qpair failed and we were unable to recover it. 
00:28:00.852 [2024-12-10 04:15:00.124040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.852 [2024-12-10 04:15:00.124126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.852 [2024-12-10 04:15:00.124141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.852 [2024-12-10 04:15:00.124149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.852 [2024-12-10 04:15:00.124155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:00.852 [2024-12-10 04:15:00.124177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.852 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.134032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.134126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.134141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.134147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.134153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.134173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.144098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.144150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.144170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.144177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.144183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.144199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 
00:28:01.112 [2024-12-10 04:15:00.154091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.154146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.154160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.154171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.154178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.154193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.164196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.164263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.164278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.164285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.164291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.164306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.174233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.174295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.174309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.174317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.174324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.174338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 
00:28:01.112 [2024-12-10 04:15:00.184226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.184284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.184299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.184307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.184313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.184328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.194188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.194244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.194258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.194266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.194273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.194289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.112 [2024-12-10 04:15:00.204298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.204359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.204373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.204381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.204388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.204403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 
00:28:01.112 [2024-12-10 04:15:00.214307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.112 [2024-12-10 04:15:00.214363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.112 [2024-12-10 04:15:00.214380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.112 [2024-12-10 04:15:00.214387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.112 [2024-12-10 04:15:00.214394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.112 [2024-12-10 04:15:00.214409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.112 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.224313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.224368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.224382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.224389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.224395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.224410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.234397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.234458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.234474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.234481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.234487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.234502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 
00:28:01.113 [2024-12-10 04:15:00.244444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.244501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.244516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.244523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.244529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.244549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.254366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.254434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.254449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.254456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.254466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.254481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.264472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.264525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.264539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.264547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.264553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.264569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 
00:28:01.113 [2024-12-10 04:15:00.274495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.274549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.274564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.274573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.274580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.274595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.284471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.284526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.284541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.284548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.284555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.284570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.294548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.294603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.294617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.294624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.294631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.294646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 
00:28:01.113 [2024-12-10 04:15:00.304577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.304634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.304649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.304657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.304663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.304679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.314628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.314716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.314731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.314739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.314745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.314760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.324633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.324692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.324707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.324715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.324722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.324737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 
00:28:01.113 [2024-12-10 04:15:00.334672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.334739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.334755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.334763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.334769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.334784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.344675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.344734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.113 [2024-12-10 04:15:00.344753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.113 [2024-12-10 04:15:00.344760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.113 [2024-12-10 04:15:00.344767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.113 [2024-12-10 04:15:00.344782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.113 qpair failed and we were unable to recover it. 00:28:01.113 [2024-12-10 04:15:00.354710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.113 [2024-12-10 04:15:00.354761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.114 [2024-12-10 04:15:00.354775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.114 [2024-12-10 04:15:00.354783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.114 [2024-12-10 04:15:00.354789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:01.114 [2024-12-10 04:15:00.354804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.114 qpair failed and we were unable to recover it. 
00:28:01.114 [2024-12-10 04:15:00.364761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.114 [2024-12-10 04:15:00.364822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.114 [2024-12-10 04:15:00.364838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.114 [2024-12-10 04:15:00.364846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.114 [2024-12-10 04:15:00.364852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.114 [2024-12-10 04:15:00.364868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.114 qpair failed and we were unable to recover it.
00:28:01.114 [2024-12-10 04:15:00.374779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.114 [2024-12-10 04:15:00.374865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.114 [2024-12-10 04:15:00.374879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.114 [2024-12-10 04:15:00.374886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.114 [2024-12-10 04:15:00.374892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.114 [2024-12-10 04:15:00.374907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.114 qpair failed and we were unable to recover it.
00:28:01.114 [2024-12-10 04:15:00.384800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.114 [2024-12-10 04:15:00.384852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.114 [2024-12-10 04:15:00.384867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.114 [2024-12-10 04:15:00.384874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.114 [2024-12-10 04:15:00.384883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.114 [2024-12-10 04:15:00.384899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.114 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.394834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.394886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.394901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.394909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.394916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.394932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.404862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.404917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.404933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.404940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.404947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.404962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.414892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.414947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.414962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.414969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.414976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.414992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.424921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.424974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.424989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.424996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.425003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.425019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.434885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.434938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.434953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.434961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.434968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.434983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.444989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.445048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.445063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.445070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.445077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.445092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.454950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.455038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.455054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.455061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.455068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.455083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.465034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.465088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.465103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.465112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.465119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.465134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.475053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.475108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.475124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.475132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.475139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.475155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.485148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.485254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.485269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.485276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.485282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.485298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.495148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.378 [2024-12-10 04:15:00.495211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.378 [2024-12-10 04:15:00.495226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.378 [2024-12-10 04:15:00.495233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.378 [2024-12-10 04:15:00.495239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.378 [2024-12-10 04:15:00.495255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.378 qpair failed and we were unable to recover it.
00:28:01.378 [2024-12-10 04:15:00.505143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.505201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.505216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.505224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.505230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.505246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.515184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.515237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.515251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.515273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.515280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.515295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.525208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.525265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.525280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.525288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.525295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.525310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.535285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.535348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.535364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.535372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.535378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.535394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.545264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.545320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.545336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.545344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.545351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.545366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.555282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.555335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.555350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.555357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.555364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.555382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.565262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.565317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.565331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.565338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.565345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.565360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.575359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.575416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.575431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.575438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.575444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.575459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.585330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.585393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.585408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.585415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.585421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.585436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.595400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.595456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.595471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.595479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.595485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.595500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.605374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.605435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.605451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.605459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.605465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.605480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.615386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.615443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.615459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.615467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.615474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.615489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.625427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.625484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.625499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.379 [2024-12-10 04:15:00.625507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.379 [2024-12-10 04:15:00.625514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.379 [2024-12-10 04:15:00.625530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.379 qpair failed and we were unable to recover it.
00:28:01.379 [2024-12-10 04:15:00.635521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.379 [2024-12-10 04:15:00.635581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.379 [2024-12-10 04:15:00.635596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.380 [2024-12-10 04:15:00.635603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.380 [2024-12-10 04:15:00.635609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.380 [2024-12-10 04:15:00.635625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.380 qpair failed and we were unable to recover it.
00:28:01.380 [2024-12-10 04:15:00.645553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.380 [2024-12-10 04:15:00.645612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.380 [2024-12-10 04:15:00.645630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.380 [2024-12-10 04:15:00.645637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.380 [2024-12-10 04:15:00.645643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.380 [2024-12-10 04:15:00.645658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.380 qpair failed and we were unable to recover it.
00:28:01.380 [2024-12-10 04:15:00.655519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.380 [2024-12-10 04:15:00.655579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.380 [2024-12-10 04:15:00.655595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.380 [2024-12-10 04:15:00.655603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.380 [2024-12-10 04:15:00.655609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.380 [2024-12-10 04:15:00.655625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.380 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.665549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.665606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.665622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.665630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.663 [2024-12-10 04:15:00.665636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.663 [2024-12-10 04:15:00.665652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.663 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.675601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.675665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.675681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.675689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.663 [2024-12-10 04:15:00.675695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.663 [2024-12-10 04:15:00.675711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.663 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.685663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.685722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.685738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.685746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.663 [2024-12-10 04:15:00.685752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.663 [2024-12-10 04:15:00.685772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.663 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.695708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.695771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.695786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.695793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.663 [2024-12-10 04:15:00.695801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.663 [2024-12-10 04:15:00.695816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.663 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.705720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.705775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.705789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.705797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.663 [2024-12-10 04:15:00.705803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.663 [2024-12-10 04:15:00.705819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.663 qpair failed and we were unable to recover it.
00:28:01.663 [2024-12-10 04:15:00.715735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.663 [2024-12-10 04:15:00.715790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.663 [2024-12-10 04:15:00.715804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.663 [2024-12-10 04:15:00.715811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.715817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.715832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.725777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.725838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.725853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.725860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.725866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.725882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.735803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.735861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.735875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.735883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.735890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.735904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.745801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.745851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.745865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.745873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.745879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.745894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.755846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.755902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.755916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.755924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.755930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.755945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.765880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.765936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.765950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.765957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.765964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.765979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.775903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.775957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.775975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.775983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.775990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.776006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.785936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.786003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.786018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.786025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.786031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.786046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.795897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.795947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.795961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.795968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.795975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.795990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.805992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.806047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.806061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.806069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.806075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.806090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.816008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.816063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.816078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.816085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.816095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.816110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.826039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.826138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.826153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.826160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.826172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.826188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.836101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.836157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.836174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.836182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.836189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.836205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.664 [2024-12-10 04:15:00.846113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.664 [2024-12-10 04:15:00.846175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.664 [2024-12-10 04:15:00.846189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.664 [2024-12-10 04:15:00.846197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.664 [2024-12-10 04:15:00.846203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.664 [2024-12-10 04:15:00.846218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.664 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.856162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.856223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.856236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.856244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.856250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.856265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.866206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.866259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.866272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.866281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.866288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.866303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.876179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.876232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.876246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.876254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.876260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.876275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.886226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.886285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.886299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.886306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.886313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.886328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.896177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.896234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.896248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.896255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.896262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.896277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.906266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.906319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.906335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.906342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.906349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.906364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.916234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.916291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.916305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.916313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.916320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.916335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.926336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.926412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.926427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.926435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.926442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.926457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.665 [2024-12-10 04:15:00.936359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.665 [2024-12-10 04:15:00.936438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.665 [2024-12-10 04:15:00.936453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.665 [2024-12-10 04:15:00.936460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.665 [2024-12-10 04:15:00.936467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.665 [2024-12-10 04:15:00.936482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.665 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.946390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.946441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.946455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.946465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.946471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.946486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.956426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.956482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.956496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.956504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.956510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.956526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.966449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.966533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.966547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.966555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.966561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.966576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.976471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.976525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.976540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.976547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.976553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.976569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.986491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.986548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.986563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.986570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.986577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.986592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:00.996550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:00.996605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:00.996619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:00.996626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:00.996633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:00.996647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:01.006565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:01.006621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:01.006635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:01.006642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:01.006648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:01.006663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:01.016582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:01.016635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:01.016649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:01.016656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:01.016662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:01.016677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:01.026544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:01.026636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:01.026650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.936 [2024-12-10 04:15:01.026657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.936 [2024-12-10 04:15:01.026663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.936 [2024-12-10 04:15:01.026679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.936 qpair failed and we were unable to recover it.
00:28:01.936 [2024-12-10 04:15:01.036591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.936 [2024-12-10 04:15:01.036671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.936 [2024-12-10 04:15:01.036686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.036693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.036699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.036714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.046599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.046652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.046665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.046672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.046679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.046694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.056674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.056734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.056748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.056755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.056762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.056777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.066713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.066777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.066790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.066798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.066804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.066819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.076788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.076844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.076857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.076867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.076874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.076889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.086797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.086855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.086869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.086877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.086884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.086899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.096846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.096927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.096941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.096948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.096954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.096969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.106893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.106956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.106970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.106978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.106984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.106999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.116931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.116989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.117004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.117012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.117019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.117037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.126916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.126974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.126988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.126995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.127002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.127018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.136870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.136923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.136937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.136944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.136951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.136966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.146970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.147026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.147040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.147047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.147053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.147068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.156986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.157044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.157057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.157064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.157070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.937 [2024-12-10 04:15:01.157086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.937 qpair failed and we were unable to recover it.
00:28:01.937 [2024-12-10 04:15:01.167057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.937 [2024-12-10 04:15:01.167120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.937 [2024-12-10 04:15:01.167133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.937 [2024-12-10 04:15:01.167141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.937 [2024-12-10 04:15:01.167147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.938 [2024-12-10 04:15:01.167162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.938 qpair failed and we were unable to recover it.
00:28:01.938 [2024-12-10 04:15:01.177056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.938 [2024-12-10 04:15:01.177110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.938 [2024-12-10 04:15:01.177123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.938 [2024-12-10 04:15:01.177130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.938 [2024-12-10 04:15:01.177137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.938 [2024-12-10 04:15:01.177152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.938 qpair failed and we were unable to recover it.
00:28:01.938 [2024-12-10 04:15:01.187067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.938 [2024-12-10 04:15:01.187131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.938 [2024-12-10 04:15:01.187145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.938 [2024-12-10 04:15:01.187152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.938 [2024-12-10 04:15:01.187158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.938 [2024-12-10 04:15:01.187177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.938 qpair failed and we were unable to recover it.
00:28:01.938 [2024-12-10 04:15:01.197109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.938 [2024-12-10 04:15:01.197172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.938 [2024-12-10 04:15:01.197186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.938 [2024-12-10 04:15:01.197193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.938 [2024-12-10 04:15:01.197199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.938 [2024-12-10 04:15:01.197214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.938 qpair failed and we were unable to recover it.
00:28:01.938 [2024-12-10 04:15:01.207139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.938 [2024-12-10 04:15:01.207201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.938 [2024-12-10 04:15:01.207217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.938 [2024-12-10 04:15:01.207225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.938 [2024-12-10 04:15:01.207231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:01.938 [2024-12-10 04:15:01.207247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.938 qpair failed and we were unable to recover it.
00:28:01.938 [2024-12-10 04:15:01.217123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.938 [2024-12-10 04:15:01.217178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.938 [2024-12-10 04:15:01.217191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.938 [2024-12-10 04:15:01.217199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.938 [2024-12-10 04:15:01.217206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.198 [2024-12-10 04:15:01.217221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.198 qpair failed and we were unable to recover it.
00:28:02.198 [2024-12-10 04:15:01.227204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.198 [2024-12-10 04:15:01.227261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.198 [2024-12-10 04:15:01.227275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.198 [2024-12-10 04:15:01.227283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.198 [2024-12-10 04:15:01.227289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.198 [2024-12-10 04:15:01.227305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.198 qpair failed and we were unable to recover it.
00:28:02.198 [2024-12-10 04:15:01.237253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.198 [2024-12-10 04:15:01.237310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.198 [2024-12-10 04:15:01.237324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.198 [2024-12-10 04:15:01.237331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.198 [2024-12-10 04:15:01.237337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.198 [2024-12-10 04:15:01.237352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.247329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.247385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.247398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.247405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.247411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.247431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.257256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.257317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.257331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.257338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.257344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.257359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.267322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.267407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.267421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.267428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.267435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.267450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.277344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.277400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.277414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.277424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.277430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.277446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.287374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.287455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.287470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.287477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.287484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.287498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.297386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.297435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.297449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.297456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.297463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.297478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.307405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.307470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.307483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.307490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.307496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.307512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.317384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.317434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.317448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.317455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.317461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.317476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.327431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.327504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.327518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.327525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.327531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.327548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.337502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.337585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.337602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.337609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.337615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.337630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.347485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.347567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.347580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.347588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.347594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.347609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.357574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.357629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.357643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.357650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.357657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.357671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.367527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.199 [2024-12-10 04:15:01.367588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.199 [2024-12-10 04:15:01.367602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.199 [2024-12-10 04:15:01.367609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.199 [2024-12-10 04:15:01.367616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.199 [2024-12-10 04:15:01.367632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.199 qpair failed and we were unable to recover it.
00:28:02.199 [2024-12-10 04:15:01.377550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.377614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.377628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.377635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.377644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.377659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.387642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.387704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.387717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.387724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.387730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.387745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.397707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.397766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.397779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.397785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.397791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.397806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.407767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.407826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.407840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.407847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.407853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.407867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.417754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.417813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.417828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.417835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.417842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.417858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.427736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.427791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.427805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.427812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.427819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.427834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.437793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.437880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.437894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.437902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.437909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.437924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.447756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.447814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.447828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.447835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.447842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.447857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.457868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.457967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.457981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.457988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.457994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.458009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.467824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.467879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.467895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.467903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.467910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.467925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.200 [2024-12-10 04:15:01.477932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.200 [2024-12-10 04:15:01.478016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.200 [2024-12-10 04:15:01.478029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.200 [2024-12-10 04:15:01.478036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.200 [2024-12-10 04:15:01.478042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.200 [2024-12-10 04:15:01.478057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.200 qpair failed and we were unable to recover it.
00:28:02.461 [2024-12-10 04:15:01.487982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.461 [2024-12-10 04:15:01.488045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.461 [2024-12-10 04:15:01.488059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.461 [2024-12-10 04:15:01.488066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.461 [2024-12-10 04:15:01.488073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.461 [2024-12-10 04:15:01.488088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.461 qpair failed and we were unable to recover it.
00:28:02.461 [2024-12-10 04:15:01.497894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.461 [2024-12-10 04:15:01.497950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.461 [2024-12-10 04:15:01.497963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.461 [2024-12-10 04:15:01.497971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.461 [2024-12-10 04:15:01.497978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90
00:28:02.461 [2024-12-10 04:15:01.497992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:02.461 qpair failed and we were unable to recover it.
00:28:02.461 [2024-12-10 04:15:01.507964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.461 [2024-12-10 04:15:01.508052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.461 [2024-12-10 04:15:01.508066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.461 [2024-12-10 04:15:01.508078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.461 [2024-12-10 04:15:01.508084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.461 [2024-12-10 04:15:01.508100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-12-10 04:15:01.517995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.461 [2024-12-10 04:15:01.518049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.461 [2024-12-10 04:15:01.518063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.461 [2024-12-10 04:15:01.518070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.461 [2024-12-10 04:15:01.518076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.461 [2024-12-10 04:15:01.518092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-12-10 04:15:01.527975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.461 [2024-12-10 04:15:01.528032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.461 [2024-12-10 04:15:01.528046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.461 [2024-12-10 04:15:01.528053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.461 [2024-12-10 04:15:01.528059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.461 [2024-12-10 04:15:01.528074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.461 qpair failed and we were unable to recover it. 
00:28:02.461 [2024-12-10 04:15:01.538047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.461 [2024-12-10 04:15:01.538112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.461 [2024-12-10 04:15:01.538125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.461 [2024-12-10 04:15:01.538133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.461 [2024-12-10 04:15:01.538139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.538154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.548097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.548153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.548170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.548178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.548184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.548200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.558054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.558109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.558123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.558130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.558136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.558151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-12-10 04:15:01.568162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.568225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.568239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.568246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.568252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.568267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.578184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.578242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.578256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.578263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.578269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.578285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.588195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.588253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.588267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.588275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.588281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.588296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-12-10 04:15:01.598233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.598291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.598305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.598313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.598319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.598334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.608273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.608328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.608342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.608349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.608355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.608369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.618330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.618391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.618405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.618412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.618418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.618433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-12-10 04:15:01.628323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.628374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.628388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.628394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.628401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.628415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.638386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.638435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.638449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.638459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.638466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.638480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.648393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.648468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.648481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.648489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.648495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.648510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-12-10 04:15:01.658385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.462 [2024-12-10 04:15:01.658446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.462 [2024-12-10 04:15:01.658468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.462 [2024-12-10 04:15:01.658476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.462 [2024-12-10 04:15:01.658482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.462 [2024-12-10 04:15:01.658502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-12-10 04:15:01.668388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.668443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.668457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.668464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.668471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.668486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 00:28:02.463 [2024-12-10 04:15:01.678509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.678569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.678583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.678590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.678597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.678616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 
00:28:02.463 [2024-12-10 04:15:01.688515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.688578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.688591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.688598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.688605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.688619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 00:28:02.463 [2024-12-10 04:15:01.698558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.698612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.698626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.698633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.698639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.698653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 00:28:02.463 [2024-12-10 04:15:01.708559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.708618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.708631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.708638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.708645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.708660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 
00:28:02.463 [2024-12-10 04:15:01.718590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.718645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.718659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.718666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.718672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.718688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 00:28:02.463 [2024-12-10 04:15:01.728562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.728620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.728633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.728640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.728646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.728661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 00:28:02.463 [2024-12-10 04:15:01.738613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.463 [2024-12-10 04:15:01.738681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.463 [2024-12-10 04:15:01.738694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.463 [2024-12-10 04:15:01.738702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.463 [2024-12-10 04:15:01.738709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.463 [2024-12-10 04:15:01.738724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.463 qpair failed and we were unable to recover it. 
00:28:02.723 [2024-12-10 04:15:01.748690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.748745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.748758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.748765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.748772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.748786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 00:28:02.723 [2024-12-10 04:15:01.758700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.758756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.758770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.758776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.758783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.758798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 00:28:02.723 [2024-12-10 04:15:01.768744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.768801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.768817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.768824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.768831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.768845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 
00:28:02.723 [2024-12-10 04:15:01.778771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.778829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.778842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.778849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.778856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.778871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 00:28:02.723 [2024-12-10 04:15:01.788778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.788832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.788846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.788853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.788859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.788874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 00:28:02.723 [2024-12-10 04:15:01.798748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.798809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.798823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.723 [2024-12-10 04:15:01.798830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.723 [2024-12-10 04:15:01.798836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.723 [2024-12-10 04:15:01.798851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.723 qpair failed and we were unable to recover it. 
00:28:02.723 [2024-12-10 04:15:01.808789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.723 [2024-12-10 04:15:01.808847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.723 [2024-12-10 04:15:01.808861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.808868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.808878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.808893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 00:28:02.724 [2024-12-10 04:15:01.818910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.724 [2024-12-10 04:15:01.818972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.724 [2024-12-10 04:15:01.818986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.818994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.819001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.819016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 00:28:02.724 [2024-12-10 04:15:01.828909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.724 [2024-12-10 04:15:01.828967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.724 [2024-12-10 04:15:01.828982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.828989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.828995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.829011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 
00:28:02.724 [2024-12-10 04:15:01.838943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.724 [2024-12-10 04:15:01.839017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.724 [2024-12-10 04:15:01.839032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.839038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.839044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.839059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 00:28:02.724 [2024-12-10 04:15:01.849010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.724 [2024-12-10 04:15:01.849071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.724 [2024-12-10 04:15:01.849085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.849092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.849098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.849113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 00:28:02.724 [2024-12-10 04:15:01.858970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.724 [2024-12-10 04:15:01.859034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.724 [2024-12-10 04:15:01.859049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.724 [2024-12-10 04:15:01.859056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.724 [2024-12-10 04:15:01.859062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f23d8000b90 00:28:02.724 [2024-12-10 04:15:01.859078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.724 qpair failed and we were unable to recover it. 00:28:02.724 [2024-12-10 04:15:01.859213] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:02.724 A controller has encountered a failure and is being reset. 00:28:02.724 Controller properly reset. 
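Once the controller is reset the test body is done, and the harness tears the target down: the trace that follows runs killprocess from test/common/autotest_common.sh and the nvmf cleanup from test/nvmf/common.sh. A condensed bash sketch of that sequence, reconstructed from the trace; the function names and the pid argument here are illustrative, and the real helpers carry a uname guard, retries and namespace handling that are omitted:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                  # process already gone, nothing to do
        if [[ $(ps --no-headers -o comm= "$pid") == sudo ]]; then
            sudo kill "$pid"                        # signal through sudo if that is what we captured
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true                         # reap the app; nonzero status is expected after a signal
    }

    nvmf_teardown() {                               # illustrative name; the suite calls this nvmftestfini
        killprocess "$1"                            # e.g. nvmf_teardown 216620
        modprobe -v -r nvme-tcp                     # unloads nvme_tcp, nvme_fabrics, nvme_keyring
        modprobe -v -r nvme-fabrics
        iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
        ip -4 addr flush cvl_0_1                    # clear the test NIC's addresses
    }

Filtering iptables-save through grep -v SPDK_NVMF and restoring the result is what lets the harness remove exactly the rules it inserted without disturbing the rest of the host firewall; in the log this shows up as the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages and the final ip -4 addr flush cvl_0_1.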
00:28:02.724 Initializing NVMe Controllers 00:28:02.724 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:02.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:02.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:02.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:02.724 Initialization complete. Launching workers. 00:28:02.724 Starting thread on core 1 00:28:02.724 Starting thread on core 2 00:28:02.724 Starting thread on core 3 00:28:02.724 Starting thread on core 0 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:02.724 00:28:02.724 real 0m11.419s 00:28:02.724 user 0m21.829s 00:28:02.724 sys 0m4.846s 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.724 ************************************ 00:28:02.724 END TEST nvmf_target_disconnect_tc2 00:28:02.724 ************************************ 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.724 04:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:02.724 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.724 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:02.724 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.724 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.983 rmmod nvme_tcp 00:28:02.983 rmmod nvme_fabrics 00:28:02.983 rmmod nvme_keyring 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 216620 ']' 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 216620 00:28:02.983 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 216620 ']' 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 216620 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216620 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216620' 00:28:02.984 killing process with pid 216620 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 216620 00:28:02.984 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 216620 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.243 04:15:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.151 00:28:05.151 real 0m20.150s 00:28:05.151 user 0m49.476s 00:28:05.151 sys 0m9.764s 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:05.151 ************************************ 00:28:05.151 END TEST nvmf_target_disconnect 00:28:05.151 ************************************ 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:05.151 00:28:05.151 real 5m49.036s 00:28:05.151 user 10m27.902s 00:28:05.151 sys 1m58.328s 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.151 04:15:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.151 ************************************ 00:28:05.151 END TEST nvmf_host 00:28:05.151 ************************************ 00:28:05.411 04:15:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:05.411 04:15:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:05.411 04:15:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:05.411 04:15:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:05.411 04:15:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.411 04:15:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:05.411 ************************************ 00:28:05.411 START TEST nvmf_target_core_interrupt_mode 00:28:05.411 ************************************ 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:05.411 * Looking for test storage... 00:28:05.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.411 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:05.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.412 --rc genhtml_branch_coverage=1 00:28:05.412 --rc genhtml_function_coverage=1 00:28:05.412 --rc genhtml_legend=1 00:28:05.412 --rc geninfo_all_blocks=1 00:28:05.412 --rc geninfo_unexecuted_blocks=1 00:28:05.412 00:28:05.412 ' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:05.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.412 --rc genhtml_branch_coverage=1 00:28:05.412 --rc genhtml_function_coverage=1 00:28:05.412 --rc genhtml_legend=1 00:28:05.412 --rc geninfo_all_blocks=1 00:28:05.412 --rc geninfo_unexecuted_blocks=1 00:28:05.412 00:28:05.412 ' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:05.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.412 --rc genhtml_branch_coverage=1 00:28:05.412 --rc genhtml_function_coverage=1 00:28:05.412 --rc genhtml_legend=1 00:28:05.412 --rc geninfo_all_blocks=1 00:28:05.412 --rc geninfo_unexecuted_blocks=1 00:28:05.412 00:28:05.412 ' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:05.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.412 --rc genhtml_branch_coverage=1 00:28:05.412 --rc genhtml_function_coverage=1 00:28:05.412 --rc genhtml_legend=1 00:28:05.412 --rc geninfo_all_blocks=1 00:28:05.412 --rc geninfo_unexecuted_blocks=1 00:28:05.412 00:28:05.412 ' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:05.412 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:05.672 ************************************ 00:28:05.672 START TEST nvmf_abort 00:28:05.672 ************************************ 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:05.672 * Looking for test storage... 00:28:05.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.672 --rc genhtml_branch_coverage=1 00:28:05.672 --rc genhtml_function_coverage=1 00:28:05.672 --rc genhtml_legend=1 00:28:05.672 --rc geninfo_all_blocks=1 00:28:05.672 --rc geninfo_unexecuted_blocks=1 00:28:05.672 00:28:05.672 ' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.672 --rc genhtml_branch_coverage=1 00:28:05.672 --rc genhtml_function_coverage=1 00:28:05.672 --rc genhtml_legend=1 00:28:05.672 --rc geninfo_all_blocks=1 00:28:05.672 --rc geninfo_unexecuted_blocks=1 00:28:05.672 00:28:05.672 ' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.672 --rc genhtml_branch_coverage=1 00:28:05.672 --rc genhtml_function_coverage=1 00:28:05.672 --rc genhtml_legend=1 00:28:05.672 --rc geninfo_all_blocks=1 00:28:05.672 --rc geninfo_unexecuted_blocks=1 00:28:05.672 00:28:05.672 ' 00:28:05.672 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.672 --rc genhtml_branch_coverage=1 00:28:05.673 --rc genhtml_function_coverage=1 00:28:05.673 --rc genhtml_legend=1 00:28:05.673 --rc geninfo_all_blocks=1 00:28:05.673 --rc geninfo_unexecuted_blocks=1 00:28:05.673 00:28:05.673 ' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.673 04:15:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.673 04:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.246 04:15:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.246 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
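The device scan logged above boils down to matching PCI vendor/device pairs against the supported e810/x722/mlx ID lists and then reading the net interface that sysfs exposes for each hit. A minimal stand-alone sketch of that logic follows; the real common.sh walks a prepopulated pci_bus_cache map, so the direct sysfs loop below is only an approximation:

    #!/usr/bin/env bash
    # Approximate the e810 discovery seen in the log: vendor 0x8086 with
    # device 0x1592 or 0x159b, then list the kernel netdevs for each match.
    intel=0x8086
    e810=(0x1592 0x159b)
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            [[ $device == "$id" ]] || continue
            # e.g. "Found 0000:af:00.0 (0x8086 - 0x159b): cvl_0_0"
            echo "Found ${dev##*/} ($vendor - $device): $(ls "$dev/net" 2>/dev/null)"
        done
    done
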
00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.246 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.246 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.246 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.246 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:28:12.247 00:28:12.247 --- 10.0.0.2 ping statistics --- 00:28:12.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.247 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:12.247 00:28:12.247 --- 10.0.0.1 ping statistics --- 00:28:12.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.247 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=221795 
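Condensed from the nvmf_tcp_init sequence just logged: the target-side port is moved into a private network namespace, each end gets one address of the 10.0.0.0/24 test subnet, and the NVMe/TCP port is opened through the firewall before reachability is verified in both directions:

    # Target NIC into its own namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Rule is tagged so teardown can strip exactly this rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
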
00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 221795 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 221795 ']' 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.247 04:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 [2024-12-10 04:15:10.992602] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:12.247 [2024-12-10 04:15:10.993483] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:28:12.247 [2024-12-10 04:15:10.993514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.247 [2024-12-10 04:15:11.068812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.247 [2024-12-10 04:15:11.110011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.247 [2024-12-10 04:15:11.110042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.247 [2024-12-10 04:15:11.110049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.247 [2024-12-10 04:15:11.110055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.247 [2024-12-10 04:15:11.110061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.247 [2024-12-10 04:15:11.111365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.247 [2024-12-10 04:15:11.111387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.247 [2024-12-10 04:15:11.111389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.247 [2024-12-10 04:15:11.179363] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:12.247 [2024-12-10 04:15:11.179873] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:12.247 [2024-12-10 04:15:11.180085] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
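The nvmfappstart wrapper resolves to the single launch command logged above; restated as a direct invocation (the backgrounding and pid capture are my restatement, not the harness's exact code, and the binary path is shortened to be repo-relative):

    # -i 0: shared-memory id, -e 0xFFFF: all tracepoint groups, -m 0xE:
    # reactors on cores 1-3, --interrupt-mode: fd_group-based interrupts
    # instead of busy polling.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

waitforlisten then blocks until the target accepts connections on the UNIX domain socket /var/tmp/spdk.sock, which is what the "Waiting for process to start up..." line above reports.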
00:28:12.247 [2024-12-10 04:15:11.180283] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 [2024-12-10 04:15:11.260198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 Malloc0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 Delay0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 [2024-12-10 04:15:11.344147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.247 04:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:12.247 [2024-12-10 04:15:11.431057] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:14.787 Initializing NVMe Controllers 00:28:14.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:14.787 controller IO queue size 128 less than required 00:28:14.787 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:14.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:14.787 Initialization complete. Launching workers. 
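Stripped of the rpc_cmd wrapper, the provisioning above is a handful of rpc.py calls against the target's default /var/tmp/spdk.sock socket, stacking a 64 MiB malloc disk behind a delay bdev so the abort tool has long-lived in-flight I/O to cancel. The flag values are exactly as logged; the glosses in the comments are mine:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s injected read/write latency
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the namespace sitting behind Delay0, the abort example's 128-deep queue saturates immediately, which is why the tool warns that the controller queue size is less than required.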
00:28:14.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37830 00:28:14.788 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37891, failed to submit 66 00:28:14.788 success 37830, unsuccessful 61, failed 0 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.788 rmmod nvme_tcp 00:28:14.788 rmmod nvme_fabrics 00:28:14.788 rmmod nvme_keyring 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 221795 ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 221795 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 221795 ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 221795 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221795 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221795' 00:28:14.788 killing process with pid 221795 00:28:14.788 
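A quick reconciliation of the abort tallies above: 37830 successful + 61 unsuccessful = 37891 aborts submitted, plus 66 that could not be submitted, for 37957 attempts in total. The 37830 successfully aborted commands reappear as the 37830 "failed" I/Os on the NS line, against the 127 that completed normally under the -q 128 queue depth, so the counters are internally consistent.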
04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 221795 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 221795 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.788 04:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.694 00:28:16.694 real 0m11.153s 00:28:16.694 user 0m10.172s 00:28:16.694 sys 0m5.651s 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.694 ************************************ 00:28:16.694 END TEST nvmf_abort 00:28:16.694 ************************************ 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.694 ************************************ 00:28:16.694 START TEST nvmf_ns_hotplug_stress 00:28:16.694 ************************************ 00:28:16.694 04:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:16.954 * Looking for test storage... 
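The nvmftestfini teardown logged just before the next test starts restores the firewall by filtering out the tagged rules rather than deleting them by number, then flushes the test addressing. A sketch; the namespace deletion is the assumed effect of _remove_spdk_ns, whose body runs with its output discarded above:

    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK_NVMF-tagged rules
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                       # assumed _remove_spdk_ns equivalent
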
00:28:16.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.954 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.955 --rc genhtml_branch_coverage=1 00:28:16.955 --rc genhtml_function_coverage=1 00:28:16.955 --rc genhtml_legend=1 00:28:16.955 --rc geninfo_all_blocks=1 00:28:16.955 --rc geninfo_unexecuted_blocks=1 00:28:16.955 00:28:16.955 ' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.955 --rc genhtml_branch_coverage=1 00:28:16.955 --rc genhtml_function_coverage=1 00:28:16.955 --rc genhtml_legend=1 00:28:16.955 --rc geninfo_all_blocks=1 00:28:16.955 --rc geninfo_unexecuted_blocks=1 00:28:16.955 00:28:16.955 ' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.955 --rc genhtml_branch_coverage=1 00:28:16.955 --rc genhtml_function_coverage=1 00:28:16.955 --rc genhtml_legend=1 00:28:16.955 --rc geninfo_all_blocks=1 00:28:16.955 --rc geninfo_unexecuted_blocks=1 00:28:16.955 00:28:16.955 ' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.955 --rc genhtml_branch_coverage=1 00:28:16.955 --rc genhtml_function_coverage=1 
00:28:16.955 --rc genhtml_legend=1 00:28:16.955 --rc geninfo_all_blocks=1 00:28:16.955 --rc geninfo_unexecuted_blocks=1 00:28:16.955 00:28:16.955 ' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
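Each source of paths/export.sh prepends the same toolchain directories again, which is why the PATH values echoed below (and earlier, for the abort test) accumulate duplicate /opt prefixes as the run progresses. Reconstructed from the logged values; the prepend order is inferred from which directory leads PATH after each logged step:

    # golangci first, then go, then protoc, each prepended unconditionally
    # on every source of export.sh.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo "$PATH"
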
00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.955 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.956 04:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.530 04:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.530 04:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:23.530 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:23.530 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.530 
04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:23.530 Found net devices under 0000:af:00.0: cvl_0_0 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.530 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:23.531 Found net devices under 0000:af:00.1: cvl_0_1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.531 04:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:28:23.531 00:28:23.531 --- 10.0.0.2 ping statistics --- 00:28:23.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.531 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:23.531 00:28:23.531 --- 10.0.0.1 ping statistics --- 00:28:23.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.531 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.531 04:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=225709 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 225709 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 225709 ']' 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.531 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 [2024-12-10 04:15:22.085696] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:23.531 [2024-12-10 04:15:22.086580] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:28:23.531 [2024-12-10 04:15:22.086612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.531 [2024-12-10 04:15:22.168969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:23.531 [2024-12-10 04:15:22.212297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.531 [2024-12-10 04:15:22.212329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.531 [2024-12-10 04:15:22.212336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.531 [2024-12-10 04:15:22.212346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.531 [2024-12-10 04:15:22.212352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.531 [2024-12-10 04:15:22.213528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.531 [2024-12-10 04:15:22.213546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.531 [2024-12-10 04:15:22.213551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.531 [2024-12-10 04:15:22.281668] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:23.531 [2024-12-10 04:15:22.282114] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:23.531 [2024-12-10 04:15:22.282351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:23.531 [2024-12-10 04:15:22.282567] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
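[editor note] At this point nvmf_tgt is up inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled on all three reactors. A hedged reconstruction of that launch and the wait-for-RPC step, with placeholder relative paths and a simplified polling loop standing in for waitforlisten:

# flags mirror the nvmf_tgt invocation traced above
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# poll the RPC socket until the app answers, a simplification of waitforlisten
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done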
00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:23.791 04:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:24.050 [2024-12-10 04:15:23.130458] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.050 04:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:24.310 04:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.310 [2024-12-10 04:15:23.538930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.310 04:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.569 04:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:24.829 Malloc0 00:28:24.829 04:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:25.088 Delay0 00:28:25.088 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.347 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:25.347 NULL1 00:28:25.347 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
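[editor note] The same target bring-up the rpc.py calls above perform, condensed into one runnable sequence; every command and value (32 MB Malloc0 with 512-byte blocks, the 1000-block NULL1 null bdev) is taken from the trace, only the rpc.py path is shortened:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1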
00:28:25.606 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:25.606 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=226185 00:28:25.606 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:25.606 04:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.984 Read completed with error (sct=0, sc=11) 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 04:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.984 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:26.984 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:27.243 true 00:28:27.243 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:27.243 04:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.179 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.179 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:28.180 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:28.437 true 00:28:28.437 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:28.437 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.695 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.695 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:28.695 04:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:28.954 true 00:28:28.955 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:28.955 04:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.151 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:30.152 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:30.410 true 00:28:30.410 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:30.410 04:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.346 04:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.346 04:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:31.346 04:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:31.604 true 00:28:31.604 04:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:31.604 04:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.863 04:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.122 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:32.122 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:32.122 true 00:28:32.122 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:32.122 04:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 04:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.500 04:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:33.500 04:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:33.759 true 00:28:33.759 04:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:33.759 04:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.697 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.697 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:34.697 04:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:34.956 true 00:28:34.956 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:34.956 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.215 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.474 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:35.474 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:35.474 true 00:28:35.733 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:35.733 04:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.669 04:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.928 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:36.928 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:37.187 true 00:28:37.187 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:37.187 04:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.124 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.124 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:38.124 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:38.383 true 00:28:38.383 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:38.383 04:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.642 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.901 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:38.901 04:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:38.901 true 00:28:38.901 04:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:38.901 04:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.279 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:40.279 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:40.538 true 00:28:40.538 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:40.538 04:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.475 04:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.475 04:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:41.475 04:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:41.734 true 00:28:41.734 04:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:41.734 04:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.993 04:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.993 04:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:41.993 04:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:42.252 true 00:28:42.252 04:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:42.252 04:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 04:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.631 04:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:43.631 04:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:43.890 true 00:28:43.890 04:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:43.890 04:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.828 04:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.828 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:44.828 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1017 00:28:45.087 true 00:28:45.087 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:45.087 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.345 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.606 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:45.606 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:45.606 true 00:28:45.606 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:45.606 04:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.801 04:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.801 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:46.801 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:47.060 true 00:28:47.060 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:47.060 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.319 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.578 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:47.578 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:47.578 true 00:28:47.578 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:47.578 04:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.956 04:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.956 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:48.956 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:49.215 true 00:28:49.215 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:49.215 04:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.152 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.152 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:50.152 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:50.411 true 00:28:50.411 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:50.411 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.669 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.928 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:50.928 04:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:50.928 true 00:28:50.928 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:50.928 04:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 04:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.305 04:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:52.305 04:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:52.564 true 00:28:52.564 04:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:52.564 04:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.501 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.501 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:53.501 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:53.760 true 00:28:53.760 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:53.760 04:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.019 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.019 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:54.019 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:54.277 true 00:28:54.277 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:54.277 04:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.653 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:55.653 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:55.912 true 00:28:55.912 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185 00:28:55.912 04:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.847 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.847 Initializing NVMe Controllers 00:28:56.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.848 Controller IO queue size 128, less than required. 00:28:56.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.848 Controller IO queue size 128, less than required. 00:28:56.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:56.848 Initialization complete. Launching workers. 
00:28:56.848 ========================================================
00:28:56.848                                                                           Latency(us)
00:28:56.848 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:56.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2265.00       1.11   41192.75    2733.44 1201856.71
00:28:56.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18707.30       9.13    6842.27    1689.23  370256.10
00:28:56.848 ========================================================
00:28:56.848 Total                                                                    :   20972.30      10.24   10552.11    1689.23 1201856.71
00:28:56.848
00:28:56.848 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:56.848 04:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:57.106 true
00:28:57.106 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 226185
00:28:57.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (226185) - No such process
00:28:57.106 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 226185
00:28:57.106 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:57.106 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:57.364 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:57.364 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:57.364 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:57.364 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:57.364 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:57.623 null0
00:28:57.623 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:57.623 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:57.623 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:57.922 null1
00:28:57.922 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:57.922 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:57.922 04:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:57.922 null2 00:28:57.922 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:57.922 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:57.922 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:58.263 null3 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:58.263 null4 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:58.263 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:58.556 null5 00:28:58.556 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:58.556 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:58.556 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:58.816 null6 00:28:58.816 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:58.816 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:58.816 04:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:58.816 null7 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
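For reference: the long run above (the null_size increments through 1028) is the single-namespace phase of ns_hotplug_stress.sh, which churns namespace 1 while an I/O generator (pid 226185 in this run) is alive. In the I/O summary above, the Total row is just the two namespace rows combined: 2265.00 + 18707.30 = 20972.30 IOPS, 1.11 + 9.13 = 10.24 MiB/s, and 10552.11 us is the IOPS-weighted mean of the two average latencies. A minimal sketch of the loop implied by the xtrace at sh@44-50 (a reconstruction, not the script verbatim; PERF_PID and the starting null_size are stand-ins, and rpc abbreviates the full rpc.py path traced above):

  # Sketch reconstructed from the xtrace; names marked below are assumptions.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000                                    # starting value not shown in this excerpt
  while kill -0 "$PERF_PID" 2>/dev/null; do         # sh@44: loop while the workload process lives
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach the Delay0 bdev
      (( null_size++ ))                                              # sh@49
      $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: grow the null bdev each pass
  done

Once the workload exits, kill -0 fails ("No such process" above), the loop ends, and namespaces 1 and 2 are removed before the parallel phase below.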
00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.816 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 231388 231389 231391 231393 231395 231397 231399 231401 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.817 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.076 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.334 04:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.335 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.593 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.852 04:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.852 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.852 04:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.852 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.852 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.852 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.112 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
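The interleaved add/remove traffic from here on is the parallel phase: eight null bdevs (null0-null7, 100 MB with 4096-byte blocks) were created above, and eight add_remove workers (pids 231388-231401, reaped by the wait at sh@66) each attach and detach their own namespace. A sketch of what the xtrace implies (sh@14-18 for the worker, sh@58-66 for the launcher; again a reconstruction, with rpc standing in for the full rpc.py path):

  # Sketch reconstructed from the xtrace; not the script verbatim.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {                                    # sh@14-18: churn one namespace
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do              # matches the (( i < 10 )) guard in the trace
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8                                        # sh@58
  pids=()
  for (( i = 0; i < nthreads; i++ )); do            # sh@59-60: one null bdev per worker
      $rpc bdev_null_create "null$i" 100 4096
  done
  for (( i = 0; i < nthreads; i++ )); do            # sh@62-64: launch the workers
      add_remove $(( i + 1 )) "null$i" &            # NSID 1..8 maps to null0..null7, as traced
      pids+=($!)
  done
  wait "${pids[@]}"                                 # sh@66: reap all eight workers

Because the eight workers run concurrently against the same subsystem, their xtrace lines interleave arbitrarily, which is why the add/remove entries below appear out of order.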
00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.371 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.630 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.889 04:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.889 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.889 04:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.889 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:01.148 04:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.148 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:01.407 04:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.407 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.408 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:01.666 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:01.925 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.184 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:02.443 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.701 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:02.702 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.702 04:16:01 
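Note the doubled records above: two consecutive (( ++i )) entries followed by two (( i < 10 )) entries. Per-command xtrace output looks like this when commands run as concurrent background jobs, so adjacent iterations' traces interleave. A small hedged illustration (do_batch is a hypothetical stand-in) of why backgrounded work produces shuffled trace order:

  set -x
  do_batch() { sleep "0.$((RANDOM % 9))"; echo "batch $1 done"; }   # hypothetical stand-in
  do_batch A &
  do_batch B &
  wait   # the two jobs' xtrace lines interleave nondeterministically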
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.702 04:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.960 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.220 rmmod nvme_tcp 00:29:03.220 rmmod nvme_fabrics 00:29:03.220 rmmod nvme_keyring 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 225709 ']' 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 225709 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 225709 ']' 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 225709 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225709 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225709' 00:29:03.220 killing process with pid 225709 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 225709 00:29:03.220 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 225709 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.480 04:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.384 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.384 00:29:05.384 real 0m48.684s 00:29:05.384 user 3m0.326s 00:29:05.384 sys 0m19.373s 00:29:05.384 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.384 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.384 ************************************ 00:29:05.384 END TEST nvmf_ns_hotplug_stress 00:29:05.384 ************************************ 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
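The teardown just traced follows a fixed pattern: nvmftestfini syncs and unloads the host-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), killprocess checks that pid 225709 still names the reactor process before killing it, and iptr strips only the firewall rules the test tagged. A condensed sketch of that pattern, using only commands visible in the trace (the pid check is simplified):

  modprobe -v -r nvme-tcp        # also drops the nvme_fabrics / nvme_keyring dependencies
  modprobe -v -r nvme-fabrics
  pid=225709
  if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      kill "$pid"
      wait "$pid"                # works because nvmf_tgt is a child of the test shell
  fi
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only SPDK-tagged rules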
common/autotest_common.sh@10 -- # set +x 00:29:05.644 ************************************ 00:29:05.644 START TEST nvmf_delete_subsystem 00:29:05.644 ************************************ 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:05.644 * Looking for test storage... 00:29:05.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.644 --rc genhtml_branch_coverage=1 00:29:05.644 --rc genhtml_function_coverage=1 00:29:05.644 --rc genhtml_legend=1 00:29:05.644 --rc geninfo_all_blocks=1 00:29:05.644 --rc geninfo_unexecuted_blocks=1 00:29:05.644 00:29:05.644 ' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.644 --rc genhtml_branch_coverage=1 00:29:05.644 --rc genhtml_function_coverage=1 00:29:05.644 --rc genhtml_legend=1 00:29:05.644 --rc geninfo_all_blocks=1 00:29:05.644 --rc geninfo_unexecuted_blocks=1 00:29:05.644 00:29:05.644 ' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.644 --rc genhtml_branch_coverage=1 00:29:05.644 --rc genhtml_function_coverage=1 00:29:05.644 --rc genhtml_legend=1 00:29:05.644 --rc geninfo_all_blocks=1 00:29:05.644 --rc geninfo_unexecuted_blocks=1 00:29:05.644 00:29:05.644 ' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.644 --rc genhtml_branch_coverage=1 00:29:05.644 --rc genhtml_function_coverage=1 00:29:05.644 --rc 
genhtml_legend=1 00:29:05.644 --rc geninfo_all_blocks=1 00:29:05.644 --rc geninfo_unexecuted_blocks=1 00:29:05.644 00:29:05.644 ' 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.644 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.645 04:16:04 
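Here nvmf/common.sh fixes the initiator identity for the whole test: nvme gen-hostnqn emits a UUID-based NQN, and the UUID suffix is reused as the host ID (80b56b8f-cbc7-e911-906e-0017a4403562 in this run), so every nvme connect carries a stable --hostnqn/--hostid pair. A sketch of that derivation; the parameter-expansion step is an assumption, only the two resulting values appear in the trace:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")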
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.645 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.904 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.904 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.904 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.904 04:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.471 04:16:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.471 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.472 04:16:10 
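gather_supported_nvmf_pci_devs classifies candidate NICs purely by PCI vendor:device ID: Intel E810 parts (0x1592, 0x159b), the X722 (0x37d2), and a list of Mellanox ConnectX IDs each land in their own array, and because this job runs with SPDK_TEST_NVMF_NICS=e810 the e810 bucket becomes pci_devs. The script reads a prebuilt pci_bus_cache; the sketch below derives the same buckets straight from lspci (illustrative, not the common.sh implementation):

  e810=(); mlx=()
  while read -r addr _ ids _; do
      case $ids in
          8086:1592 | 8086:159b) e810+=("$addr") ;;   # Intel E810 variants
          15b3:*)                mlx+=("$addr") ;;    # any Mellanox part
      esac
  done < <(lspci -Dn -d '::0200')                     # all Ethernet-class functions
  pci_devs=("${e810[@]}")                             # SPDK_TEST_NVMF_NICS=e810 selects this bucket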
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:12.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:12.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.472 04:16:10 
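Each surviving PCI function is then resolved to its kernel network interface through sysfs: the glob /sys/bus/pci/devices/$pci/net/* names the netdev bound to that function, which is how 0000:af:00.0 and 0000:af:00.1 map to cvl_0_0 and cvl_0_1 in the next records. A minimal sketch of that lookup:

  for pci in 0000:af:00.0 0000:af:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue                  # no netdev bound to this function
          echo "Found net devices under $pci: ${path##*/}"
      done
  done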
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:12.472 Found net devices under 0000:af:00.0: cvl_0_0 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:12.472 Found net devices under 0000:af:00.1: cvl_0_1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:29:12.472 00:29:12.472 --- 10.0.0.2 ping statistics --- 00:29:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.472 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:29:12.472 00:29:12.472 --- 10.0.0.1 ping statistics --- 00:29:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.472 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:12.472 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=235689 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 235689 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 235689 ']' 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
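[Note] The nvmf_tcp_init step above gives the target NIC its own network namespace, so initiator and target traffic take a real TCP path across the two e810 ports. Condensed, the sequence the harness ran is roughly:

ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'     # comment tag lets teardown strip exactly this rule
ping -c 1 10.0.0.2                           # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1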
00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.473 04:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 [2024-12-10 04:16:10.913090] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:12.473 [2024-12-10 04:16:10.913940] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:12.473 [2024-12-10 04:16:10.913971] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.473 [2024-12-10 04:16:10.976456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:12.473 [2024-12-10 04:16:11.017429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.473 [2024-12-10 04:16:11.017463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.473 [2024-12-10 04:16:11.017471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.473 [2024-12-10 04:16:11.017477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.473 [2024-12-10 04:16:11.017482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.473 [2024-12-10 04:16:11.022188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.473 [2024-12-10 04:16:11.022193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.473 [2024-12-10 04:16:11.090235] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.473 [2024-12-10 04:16:11.090507] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:12.473 [2024-12-10 04:16:11.090555] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
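[Note] nvmfappstart then launches the target inside that namespace with interrupt mode on two cores and blocks until the RPC socket answers. A sketch of the idea (the real waitforlisten helper in autotest_common.sh has more guards; the polling loop below is an assumption of its shape, not a copy):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll until the app listens on /var/tmp/spdk.sock -- this is what the
# "Waiting for process to start up and listen..." message above is doing.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died early
    sleep 0.1
done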
00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 [2024-12-10 04:16:11.170904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 [2024-12-10 04:16:11.199312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 NULL1 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 
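[Note] With the target up, delete_subsystem.sh@15-18 assembles the test stack over RPC. Equivalently, against the default /var/tmp/spdk.sock (flags copied from the run above; see bdev_delay_create right after this):

rpc.py nvmf_create_transport -t tcp -o -u 8192       # same transport opts as this run
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                   # allow any host, serial no., max 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                       # listen inside the target namespace
rpc.py bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks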
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 Delay0 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=235840 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:12.473 04:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:12.473 [2024-12-10 04:16:11.313884] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
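[Note] The Delay0 bdev is the crux of the test: with 1,000,000 us injected into every latency category, each I/O takes about a second, so the perf workload is guaranteed to have its full queue depth outstanding when the subsystem is deleted two seconds in. A sketch of the sequence above:

rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # read/write latencies, microseconds
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # 5 s, qd 128, 70% reads, 512 B I/O
perf_pid=$!
sleep 2                                              # let I/O pile up on Delay0
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1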
00:29:14.377 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:14.377 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.377 04:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:14.377-00:29:15.315 [repeated spdk_nvme_perf completion records condensed: long interleaved runs of "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", before and between the *ERROR* records below, as the in-flight qd=128 workload is failed back by the deleted subsystem]
00:29:14.377 [2024-12-10 04:16:13.404615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78b40 is same with the state(6) to be set
00:29:14.377 [2024-12-10 04:16:13.405394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78780 is same with the state(6) to be set
00:29:15.315 [2024-12-10 04:16:14.367895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c799b0 is same with the state(6) to be set
00:29:15.315 [2024-12-10 04:16:14.408060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9af800d800 is same with the state(6) to be set
00:29:15.315 [2024-12-10 04:16:14.409411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9af800d060 is same with the state(6) to be set
00:29:15.315 [2024-12-10 04:16:14.409523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c782c0 is same with the state(6) to be set
00:29:15.315 [2024-12-10 04:16:14.410428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78960 is same with the state(6) to be set
00:29:15.315 Initializing NVMe Controllers
00:29:15.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:15.315 Controller IO queue size 128, less than required.
00:29:15.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:15.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:15.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:15.315 Initialization complete. Launching workers.
00:29:15.315 ======================================================== 00:29:15.315 Latency(us) 00:29:15.315 Device Information : IOPS MiB/s Average min max 00:29:15.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.24 0.08 912112.88 461.92 1042903.73 00:29:15.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.09 0.09 909751.97 410.84 1013527.24 00:29:15.315 ======================================================== 00:29:15.315 Total : 345.33 0.17 910867.98 410.84 1042903.73 00:29:15.315 00:29:15.315 [2024-12-10 04:16:14.410814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c799b0 (9): Bad file descriptor 00:29:15.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:15.315 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.315 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:15.315 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 235840 00:29:15.315 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 235840 00:29:15.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (235840) - No such process 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 235840 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 235840 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 235840 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.883 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
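[Note] The error burst above is the test passing, not failing: each "completed with error (sct=0, sc=8)" is NVMe generic status 0x08, Command Aborted due to SQ Deletion, which is what queued I/O should report when its subsystem is deleted underneath it. perf then exits on its own and the script only has to notice; a sketch of the wait loop (delete_subsystem.sh@34-38 above):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # still running?
    (( delay++ > 30 )) && exit 1            # give up after ~15 s of 0.5 s naps
    sleep 0.5
done
# Once perf is reaped, kill -0 prints "No such process", as seen in the log.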
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.884 [2024-12-10 04:16:14.939123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=236378 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:15.884 04:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.884 [2024-12-10 04:16:15.022530] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
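[Note] The second half inverts the scenario: the subsystem is re-created (delete_subsystem.sh@48-50 above) and a shorter perf run is left to complete against the intact target, with the same 0.5 s poll loop, now on a ~10 s budget ((( delay++ > 20 ))). The launch differs only in duration:

./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &   # 3 s this time, no mid-run delete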
00:29:16.450 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.450 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:16.450 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:16.708 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.708 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:16.708 04:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:17.275 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:17.275 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:17.275 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:17.843 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:17.843 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:17.843 04:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:18.410 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:18.410 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:18.410 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:18.978 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:18.978 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:18.978 04:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:18.978 Initializing NVMe Controllers 00:29:18.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.978 Controller IO queue size 128, less than required. 00:29:18.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:18.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:18.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:18.978 Initialization complete. Launching workers. 
00:29:18.978 ======================================================== 00:29:18.978 Latency(us) 00:29:18.978 Device Information : IOPS MiB/s Average min max 00:29:18.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002293.18 1000130.03 1041521.54 00:29:18.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005230.75 1000278.86 1042217.32 00:29:18.978 ======================================================== 00:29:18.978 Total : 256.00 0.12 1003761.97 1000130.03 1042217.32 00:29:18.978 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 236378 00:29:19.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (236378) - No such process 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 236378 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:19.236 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.237 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.237 rmmod nvme_tcp 00:29:19.237 rmmod nvme_fabrics 00:29:19.495 rmmod nvme_keyring 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 235689 ']' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 235689 ']' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
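[Note] The clean run's table above is worth a sanity check: Delay0 adds 1,000,000 us to every I/O, and by Little's law the in-flight count should equal IOPS x latency. Taking the "from core 2" row:

awk 'BEGIN { print 128.00 * 1002293.18 / 1e6 }'   # ~128.3 in flight ~= the -q 128 queue depth

The ~2-5 ms over the configured 1 s average is transport and target overhead, so both cores sitting at exactly 128.00 IOPS is the expected steady state, not a throughput problem.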
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235689' 00:29:19.495 killing process with pid 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 235689 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.495 04:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.030 00:29:22.030 real 0m16.121s 00:29:22.030 user 0m26.077s 00:29:22.030 sys 0m5.996s 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:22.030 ************************************ 00:29:22.030 END TEST nvmf_delete_subsystem 00:29:22.030 ************************************ 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
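[Note] nvmftestfini above unwinds the whole fixture; the non-obvious part is the iptables round-trip, which can drop only the setup rule because that rule carried an SPDK_NVMF comment. A sketch of the teardown (namespace deletion is assumed to be what _remove_spdk_ns amounts to here):

kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess: stop the target
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics  # unload host-side modules (rmmod lines above)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the comment-tagged rule
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns (sketch)
ip -4 addr flush cvl_0_1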
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:22.030 ************************************ 00:29:22.030 START TEST nvmf_host_management 00:29:22.030 ************************************ 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:22.030 * Looking for test storage... 00:29:22.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:22.030 04:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.030 --rc genhtml_branch_coverage=1 00:29:22.030 --rc genhtml_function_coverage=1 00:29:22.030 --rc genhtml_legend=1 00:29:22.030 --rc geninfo_all_blocks=1 00:29:22.030 --rc geninfo_unexecuted_blocks=1 00:29:22.030 00:29:22.030 ' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.030 --rc genhtml_branch_coverage=1 00:29:22.030 --rc genhtml_function_coverage=1 00:29:22.030 --rc genhtml_legend=1 00:29:22.030 --rc geninfo_all_blocks=1 00:29:22.030 --rc geninfo_unexecuted_blocks=1 00:29:22.030 00:29:22.030 ' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.030 --rc genhtml_branch_coverage=1 00:29:22.030 --rc genhtml_function_coverage=1 00:29:22.030 --rc genhtml_legend=1 00:29:22.030 --rc geninfo_all_blocks=1 00:29:22.030 --rc geninfo_unexecuted_blocks=1 00:29:22.030 00:29:22.030 ' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.030 --rc genhtml_branch_coverage=1 00:29:22.030 --rc genhtml_function_coverage=1 00:29:22.030 --rc genhtml_legend=1 
00:29:22.030 --rc geninfo_all_blocks=1 00:29:22.030 --rc geninfo_unexecuted_blocks=1 00:29:22.030 00:29:22.030 ' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.030 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.031 04:16:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.031 04:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.601 04:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:28.601 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:28.602 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:28.602 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
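At this point nvmf/common.sh has matched both e810 ports (Intel vendor 0x8086, device 0x159b, ice driver) and is resolving each PCI function's kernel netdev names from sysfs, as the "Found net devices under ..." lines below confirm. A minimal standalone sketch of that sysfs walk, assuming the same device-ID allow-list as the log (the script itself is hypothetical, not part of the test suite):

#!/usr/bin/env bash
# Sketch: list Intel E810 NICs (device IDs 0x1592/0x159b) and the kernel
# netdev names registered for each PCI function.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
    [[ $vendor == "$intel" ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    # netdevs for this PCI function live under <device>/net/
    for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] && echo "Found ${pci##*/} ($vendor - $device): ${net_dev##*/}"
    done
done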
00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:28.602 Found net devices under 0000:af:00.0: cvl_0_0 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:28.602 Found net devices under 0000:af:00.1: cvl_0_1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:29:28.602 00:29:28.602 --- 10.0.0.2 ping statistics --- 00:29:28.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.602 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:29:28.602 00:29:28.602 --- 10.0.0.1 ping statistics --- 00:29:28.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.602 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.602 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=240460 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 240460 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 240460 ']' 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.603 04:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 [2024-12-10 04:16:27.042391] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:28.603 [2024-12-10 04:16:27.043299] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:28.603 [2024-12-10 04:16:27.043334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.603 [2024-12-10 04:16:27.123102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.603 [2024-12-10 04:16:27.163456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.603 [2024-12-10 04:16:27.163504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.603 [2024-12-10 04:16:27.163511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.603 [2024-12-10 04:16:27.163517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.603 [2024-12-10 04:16:27.163522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.603 [2024-12-10 04:16:27.164991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.603 [2024-12-10 04:16:27.165096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.603 [2024-12-10 04:16:27.165214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:28.603 [2024-12-10 04:16:27.165213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.603 [2024-12-10 04:16:27.232690] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:28.603 [2024-12-10 04:16:27.233497] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:28.603 [2024-12-10 04:16:27.233699] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:28.603 [2024-12-10 04:16:27.234133] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:28.603 [2024-12-10 04:16:27.234179] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
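nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier, with interrupt mode enabled and core mask 0x1E, then waitforlisten blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming it runs from the SPDK repo root (the retry loop is a simplification of autotest_common.sh's waitforlisten, not a copy of it):

# Start the target in the test namespace; flags mirror the command in the log.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready to accept RPCs.
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done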
00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 [2024-12-10 04:16:27.314071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 Malloc0 00:29:28.603 [2024-12-10 04:16:27.406301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=240553 00:29:28.603 04:16:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 240553 /var/tmp/bdevperf.sock 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 240553 ']' 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.603 { 00:29:28.603 "params": { 00:29:28.603 "name": "Nvme$subsystem", 00:29:28.603 "trtype": "$TEST_TRANSPORT", 00:29:28.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.603 "adrfam": "ipv4", 00:29:28.603 "trsvcid": "$NVMF_PORT", 00:29:28.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.603 "hdgst": ${hdgst:-false}, 00:29:28.603 "ddgst": ${ddgst:-false} 00:29:28.603 }, 00:29:28.603 "method": "bdev_nvme_attach_controller" 00:29:28.603 } 00:29:28.603 EOF 00:29:28.603 )") 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
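The heredoc above is gen_nvmf_target_json assembling a bdev_nvme_attach_controller config for bdevperf, which receives it as --json /dev/fd/63 via process substitution; the fully expanded object is printed just below. A minimal sketch of that wiring with this run's values (the outer "subsystems"/"bdev" wrapper is an assumption here, since the log only shows the inner method/params object):

# Generate the attach-controller JSON and hand it to bdevperf without a temp
# file: <(...) appears in the child process as /dev/fd/N.
gen_json() {
cat <<EOF
{"subsystems":[{"subsystem":"bdev","config":[{
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false } }]}]}
EOF
}
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json) -q 64 -o 65536 -w verify -t 10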
00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:28.603 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:28.603 "params": { 00:29:28.603 "name": "Nvme0", 00:29:28.603 "trtype": "tcp", 00:29:28.603 "traddr": "10.0.0.2", 00:29:28.603 "adrfam": "ipv4", 00:29:28.603 "trsvcid": "4420", 00:29:28.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.603 "hdgst": false, 00:29:28.603 "ddgst": false 00:29:28.603 }, 00:29:28.603 "method": "bdev_nvme_attach_controller" 00:29:28.603 }' 00:29:28.603 [2024-12-10 04:16:27.486910] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:28.603 [2024-12-10 04:16:27.486957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240553 ] 00:29:28.603 [2024-12-10 04:16:27.561490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.603 [2024-12-10 04:16:27.600994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.863 Running I/O for 10 seconds... 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.863 04:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.863 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:29:28.863 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:29:28.863 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.123 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:29.123 [2024-12-10 04:16:28.317791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e730 is same with the state(6) to be set 00:29:29.123 [2024-12-10 04:16:28.317834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e730 is same with the state(6) to be set 00:29:29.123 [2024-12-10 04:16:28.317842] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249e730 is same with the state(6) to be set 00:29:29.123
[identical tcp.c:1790 recv-state message repeated for each timestamp from 04:16:28.317849 through 04:16:28.318216]
00:29:29.124 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.124 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:29.124 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.124 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:29.124 [2024-12-10 04:16:28.326159] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:29.124 [2024-12-10 04:16:28.326197-326248] nvme_qpair.c: (repeated notices condensed) the four outstanding admin ASYNC EVENT REQUEST commands (qid:0 cid:0-3) each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:29.124 [2024-12-10 04:16:28.326248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0a7e0 is same with the state(6) to be set
00:29:29.124 [2024-12-10 04:16:28.326298-327249] nvme_qpair.c: (repeated notices condensed) every outstanding I/O command on qid:1 likewise completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 during queue teardown: READ sqid:1 cid:32-63 covering lba 102400-106368 and WRITE sqid:1 cid:0-31 covering lba 106496-110464, all len:128 at a 128-block stride, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.125 [2024-12-10 04:16:28.328192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:29.125 task offset: 102400 on job bdev=Nvme0n1 fails
00:29:29.125
00:29:29.125 Latency(us)
[2024-12-10T03:16:28.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.125 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:29.125 Job: Nvme0n1 ended in about 0.41 seconds with error
00:29:29.125 Verification LBA range: start 0x0 length 0x400
00:29:29.125 Nvme0n1 : 0.41 1943.18 121.45 155.45 0.00 29698.79 1412.14 26588.89 
[2024-12-10T03:16:28.411Z] =================================================================================================================== 00:29:29.125 [2024-12-10T03:16:28.411Z] Total : 1943.18 121.45 155.45 0.00 29698.79 1412.14 26588.89 00:29:29.125 [2024-12-10 04:16:28.330532] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:29.125 [2024-12-10 04:16:28.330551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a7e0 (9): Bad file descriptor 00:29:29.125 [2024-12-10 04:16:28.333410] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:29.125 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.125 04:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:30.060 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 240553 00:29:30.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (240553) - No such process 00:29:30.060 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:30.060 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.319 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.319 { 00:29:30.319 "params": { 00:29:30.319 "name": "Nvme$subsystem", 00:29:30.319 "trtype": "$TEST_TRANSPORT", 00:29:30.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.319 "adrfam": "ipv4", 00:29:30.319 "trsvcid": "$NVMF_PORT", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.320 "hdgst": ${hdgst:-false}, 00:29:30.320 "ddgst": ${ddgst:-false} 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 } 00:29:30.320 EOF 00:29:30.320 )") 00:29:30.320 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:30.320 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:30.320 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:30.320 04:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme0", 00:29:30.320 "trtype": "tcp", 00:29:30.320 "traddr": "10.0.0.2", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "4420", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:30.320 "hdgst": false, 00:29:30.320 "ddgst": false 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 }' 00:29:30.320 [2024-12-10 04:16:29.389622] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:30.320 [2024-12-10 04:16:29.389670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240800 ] 00:29:30.320 [2024-12-10 04:16:29.462449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.320 [2024-12-10 04:16:29.500082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.578 Running I/O for 1 seconds... 00:29:31.955 1990.00 IOPS, 124.38 MiB/s 00:29:31.955 Latency(us) 00:29:31.955 [2024-12-10T03:16:31.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.955 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.955 Verification LBA range: start 0x0 length 0x400 00:29:31.955 Nvme0n1 : 1.01 2039.95 127.50 0.00 0.00 30868.26 1334.13 26588.89 00:29:31.955 [2024-12-10T03:16:31.241Z] =================================================================================================================== 00:29:31.955 [2024-12-10T03:16:31.241Z] Total : 2039.95 127.50 0.00 0.00 30868.26 1334.13 26588.89 00:29:31.955 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:31.955 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:31.955 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.956 04:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.956 rmmod nvme_tcp 00:29:31.956 rmmod nvme_fabrics 00:29:31.956 rmmod nvme_keyring 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 240460 ']' 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 240460 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 240460 ']' 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 240460 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240460 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240460' 00:29:31.956 killing process with pid 240460 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 240460 00:29:31.956 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 240460 00:29:32.214 [2024-12-10 04:16:31.269736] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:32.214 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 
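(The killprocess/iptr teardown traced here follows a compact shell idiom: probe the target pid with kill -0, confirm via ps that it is not a bare sudo wrapper, kill and reap it, then strip only the firewall rules tagged with the SPDK_NVMF comment. A minimal standalone sketch of that idiom, assembled from the commands visible in the trace — the function wrapper and pid variable are illustrative, not names taken verbatim from the harness:

killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests that the process still exists
    kill -0 "$pid" 2>/dev/null || return 0
    # ps --no-headers -o comm= prints just the command name, as in the trace
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # refuse to kill a bare sudo wrapper by mistake
    [ "$name" = sudo ] || kill "$pid"
    wait "$pid" 2>/dev/null || true
}

# drop only the iptables rules carrying the SPDK_NVMF comment, keep the rest
iptables-save | grep -v SPDK_NVMF | iptables-restore
)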
00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.215 04:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.117 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.117 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:34.117 00:29:34.117 real 0m12.473s 00:29:34.117 user 0m18.668s 00:29:34.117 sys 0m6.248s 00:29:34.117 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.117 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:34.117 ************************************ 00:29:34.117 END TEST nvmf_host_management 00:29:34.117 ************************************ 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:34.377 ************************************ 00:29:34.377 START TEST nvmf_lvol 00:29:34.377 ************************************ 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:34.377 * Looking for test storage... 
00:29:34.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:34.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.377 --rc genhtml_branch_coverage=1 00:29:34.377 --rc genhtml_function_coverage=1 00:29:34.377 --rc genhtml_legend=1 00:29:34.377 --rc geninfo_all_blocks=1 00:29:34.377 --rc geninfo_unexecuted_blocks=1 00:29:34.377 00:29:34.377 ' 00:29:34.377 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:34.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.377 --rc genhtml_branch_coverage=1 00:29:34.377 --rc genhtml_function_coverage=1 00:29:34.377 --rc genhtml_legend=1 00:29:34.378 --rc geninfo_all_blocks=1 00:29:34.378 --rc geninfo_unexecuted_blocks=1 00:29:34.378 00:29:34.378 ' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.378 --rc genhtml_branch_coverage=1 00:29:34.378 --rc genhtml_function_coverage=1 00:29:34.378 --rc genhtml_legend=1 00:29:34.378 --rc geninfo_all_blocks=1 00:29:34.378 --rc geninfo_unexecuted_blocks=1 00:29:34.378 00:29:34.378 ' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:34.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.378 --rc genhtml_branch_coverage=1 00:29:34.378 --rc genhtml_function_coverage=1 00:29:34.378 --rc genhtml_legend=1 00:29:34.378 --rc geninfo_all_blocks=1 00:29:34.378 --rc geninfo_unexecuted_blocks=1 00:29:34.378 00:29:34.378 ' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.378 04:16:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.378 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.637 04:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.207 04:16:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:41.207 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:41.207 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:41.207 Found net devices under 0000:af:00.0: cvl_0_0 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:41.207 Found net devices under 0000:af:00.1: cvl_0_1 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.207 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.208 
04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:29:41.208 00:29:41.208 --- 10.0.0.2 ping statistics --- 00:29:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.208 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:29:41.208 00:29:41.208 --- 10.0.0.1 ping statistics --- 00:29:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.208 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=244496 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 244496 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 244496 ']' 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:41.208 [2024-12-10 04:16:39.563514] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:41.208 [2024-12-10 04:16:39.564422] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:29:41.208 [2024-12-10 04:16:39.564458] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.208 [2024-12-10 04:16:39.643599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.208 [2024-12-10 04:16:39.684463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.208 [2024-12-10 04:16:39.684499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.208 [2024-12-10 04:16:39.684505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.208 [2024-12-10 04:16:39.684512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.208 [2024-12-10 04:16:39.684517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.208 [2024-12-10 04:16:39.685800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.208 [2024-12-10 04:16:39.685912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.208 [2024-12-10 04:16:39.685913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.208 [2024-12-10 04:16:39.754344] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.208 [2024-12-10 04:16:39.755263] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:41.208 [2024-12-10 04:16:39.755381] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:41.208 [2024-12-10 04:16:39.755560] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
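What the prologue above amounts to: nvmf_tcp_init moves the first E810 netdev (cvl_0_0 on this host) into a private network namespace and addresses it as the target side (10.0.0.2), leaves the second port (cvl_0_1) in the root namespace as the initiator side (10.0.0.1), opens the NVMe/TCP port in the firewall, and launches nvmf_tgt inside the namespace with core mask 0x7 and --interrupt-mode, which is why three reactors come up and every spdk_thread is switched to intr mode. A condensed sketch of the same bring-up, reusing this run's interface names (they differ per host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the rule is tagged so teardown can drop it via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7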
00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.208 04:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:41.208 [2024-12-10 04:16:40.002688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.208 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:41.208 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:41.208 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:41.208 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:41.208 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:41.467 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:41.726 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ab75862c-11a1-4621-add3-8b44cdccf66c 00:29:41.726 04:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab75862c-11a1-4621-add3-8b44cdccf66c lvol 20 00:29:41.984 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=663e6da8-c25b-4cef-b47e-12538a9578b8 00:29:41.984 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.984 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 663e6da8-c25b-4cef-b47e-12538a9578b8 00:29:42.243 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.501 [2024-12-10 04:16:41.598576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:42.501 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.760 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=244964 00:29:42.760 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:42.760 04:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:43.696 04:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 663e6da8-c25b-4cef-b47e-12538a9578b8 MY_SNAPSHOT 00:29:43.955 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=381eac0c-ee50-4371-8e0a-1df51d188bb9 00:29:43.955 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 663e6da8-c25b-4cef-b47e-12538a9578b8 30 00:29:44.213 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 381eac0c-ee50-4371-8e0a-1df51d188bb9 MY_CLONE 00:29:44.472 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3b112d3a-234e-43d3-a9f6-d6d250218386 00:29:44.472 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3b112d3a-234e-43d3-a9f6-d6d250218386 00:29:44.731 04:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 244964 00:29:54.714 Initializing NVMe Controllers 00:29:54.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:54.714 Controller IO queue size 128, less than required. 00:29:54.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:54.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:54.714 Initialization complete. Launching workers. 
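While the perf workload runs (its latency summary follows), nvmf_lvol.sh drives the logical-volume RPC surface: an lvstore on a RAID-0 of two malloc bdevs, a 20 MiB thin lvol exported over NVMe/TCP, then a snapshot, a live resize to 30 MiB, a clone of the snapshot, and an inflate of the clone, all under I/O. The sequence condensed, using the UUIDs generated in this run (rpc.py stands for the full scripts/rpc.py path seen above):

rpc.py bdev_malloc_create 64 512                                            # Malloc0
rpc.py bdev_malloc_create 64 512                                            # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                                   # -> ab75862c-11a1-4621-add3-8b44cdccf66c
rpc.py bdev_lvol_create -u ab75862c-11a1-4621-add3-8b44cdccf66c lvol 20     # -> 663e6da8-c25b-4cef-b47e-12538a9578b8
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 663e6da8-c25b-4cef-b47e-12538a9578b8
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_lvol_snapshot 663e6da8-c25b-4cef-b47e-12538a9578b8 MY_SNAPSHOT  # -> 381eac0c-ee50-4371-8e0a-1df51d188bb9
rpc.py bdev_lvol_resize 663e6da8-c25b-4cef-b47e-12538a9578b8 30             # grow the live lvol
rpc.py bdev_lvol_clone 381eac0c-ee50-4371-8e0a-1df51d188bb9 MY_CLONE        # -> 3b112d3a-234e-43d3-a9f6-d6d250218386
rpc.py bdev_lvol_inflate 3b112d3a-234e-43d3-a9f6-d6d250218386               # decouple the clone from its snapshot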
00:29:54.714 ========================================================
00:29:54.714                                                                           Latency(us)
00:29:54.714 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:29:54.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   12445.40      48.61   10291.10    4418.03   67153.05
00:29:54.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   12549.00      49.02   10200.72    2388.93   68672.96
00:29:54.714 ========================================================
00:29:54.714 Total                                                                    :   24994.40      97.63   10245.72    2388.93   68672.96
00:29:54.714
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 663e6da8-c25b-4cef-b47e-12538a9578b8
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab75862c-11a1-4621-add3-8b44cdccf66c
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:54.714 rmmod nvme_tcp
00:29:54.714 rmmod nvme_fabrics
00:29:54.714 rmmod nvme_keyring
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 244496 ']'
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 244496
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 244496 ']'
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 244496
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244496 00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244496' 00:29:54.714 killing process with pid 244496 00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 244496 00:29:54.714 04:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 244496 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.715 04:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.210 00:29:56.210 real 0m21.759s 00:29:56.210 user 0m55.703s 00:29:56.210 sys 0m9.640s 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:56.210 ************************************ 00:29:56.210 END TEST nvmf_lvol 00:29:56.210 ************************************ 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:56.210 ************************************ 00:29:56.210 START TEST nvmf_lvs_grow 00:29:56.210 
************************************ 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:56.210 * Looking for test storage... 00:29:56.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:56.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.210 --rc genhtml_branch_coverage=1 00:29:56.210 --rc genhtml_function_coverage=1 00:29:56.210 --rc genhtml_legend=1 00:29:56.210 --rc geninfo_all_blocks=1 00:29:56.210 --rc geninfo_unexecuted_blocks=1 00:29:56.210 00:29:56.210 ' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:56.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.210 --rc genhtml_branch_coverage=1 00:29:56.210 --rc genhtml_function_coverage=1 00:29:56.210 --rc genhtml_legend=1 00:29:56.210 --rc geninfo_all_blocks=1 00:29:56.210 --rc geninfo_unexecuted_blocks=1 00:29:56.210 00:29:56.210 ' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:56.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.210 --rc genhtml_branch_coverage=1 00:29:56.210 --rc genhtml_function_coverage=1 00:29:56.210 --rc genhtml_legend=1 00:29:56.210 --rc geninfo_all_blocks=1 00:29:56.210 --rc geninfo_unexecuted_blocks=1 00:29:56.210 00:29:56.210 ' 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:56.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.210 --rc genhtml_branch_coverage=1 00:29:56.210 --rc genhtml_function_coverage=1 00:29:56.210 --rc genhtml_legend=1 00:29:56.210 --rc geninfo_all_blocks=1 00:29:56.210 --rc geninfo_unexecuted_blocks=1 00:29:56.210 00:29:56.210 ' 00:29:56.210 04:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.210 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
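The version gate traced a few records back (scripts/common.sh: lt 1.15 2, i.e. cmp_versions 1.15 '<' 2, deciding which lcov coverage options to use) splits each version string on '.', '-' and ':' and compares the fields numerically from the left. A simplified standalone rendering of that logic, assuming purely numeric fields (the real helper also sanitizes each field through its decimal() check):

# usage: cmp_versions 1.15 '<' 2   -> exit 0, so the newer lcov option set is selected
cmp_versions() {
    local IFS=.-:
    local op=$2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}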
00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.211 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.471 04:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.039 04:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.039 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:03.040 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:03.040 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:03.040 Found net devices under 0000:af:00.0: cvl_0_0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:03.040 Found net devices under 0000:af:00.1: cvl_0_1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.040 04:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:30:03.040 00:30:03.040 --- 10.0.0.2 ping statistics --- 00:30:03.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.040 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:30:03.040 00:30:03.040 --- 10.0.0.1 ping statistics --- 00:30:03.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.040 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=250211 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 250211 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 250211 ']' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.040 [2024-12-10 04:17:01.430190] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
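From here nvmf_lvs_grow repeats the identical bring-up, but pins nvmf_tgt to a single core (-m 0x1: one reactor, still interrupt-driven). The discovery step it re-ran (gather_supported_nvmf_pci_devs) matches PCI devices against the supported-NIC ID tables assembled above (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox ConnectX IDs) and collects each matching port's kernel netdev from sysfs. A rough standalone sketch of the E810 match; the harness itself walks a prebuilt pci_bus_cache map rather than globbing sysfs:

for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == 0x8086 && $device =~ ^0x159[2b]$ ]] || continue
    # a port bound to the ice driver exposes its netdev under .../net/ (cvl_0_0, cvl_0_1 on this host)
    echo "Found ${dev##*/} ($vendor - $device): $(ls "$dev/net" 2>/dev/null)"
done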
00:30:03.040 [2024-12-10 04:17:01.431122] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:03.040 [2024-12-10 04:17:01.431154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.040 [2024-12-10 04:17:01.509424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.040 [2024-12-10 04:17:01.549063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.040 [2024-12-10 04:17:01.549099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.040 [2024-12-10 04:17:01.549105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.040 [2024-12-10 04:17:01.549111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.040 [2024-12-10 04:17:01.549116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.040 [2024-12-10 04:17:01.549623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.040 [2024-12-10 04:17:01.616962] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.040 [2024-12-10 04:17:01.617157] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:03.040 [2024-12-10 04:17:01.850278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.040 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:03.040 ************************************ 00:30:03.040 START TEST lvs_grow_clean 00:30:03.040 ************************************ 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:03.041 04:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:03.041 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:03.041 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=62e83c6c-b54e-4bff-b7ae-b71e440d474a 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:03.300 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a lvol 150 00:30:03.559 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa0828bf-6ad9-44a1-b8ed-4780a367559f 00:30:03.559 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:03.559 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:03.817 [2024-12-10 04:17:02.914004] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:03.817 [2024-12-10 04:17:02.914130] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:03.817 true 00:30:03.817 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a 00:30:03.817 04:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:04.075 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:04.075 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:04.075 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa0828bf-6ad9-44a1-b8ed-4780a367559f 00:30:04.334 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.593 [2024-12-10 04:17:03.642508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=250577 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 250577 /var/tmp/bdevperf.sock 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 250577 ']' 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.593 04:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:04.852 [2024-12-10 04:17:03.887864] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:04.852 [2024-12-10 04:17:03.887912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250577 ] 00:30:04.852 [2024-12-10 04:17:03.961926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.852 [2024-12-10 04:17:04.002668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.852 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.852 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:04.852 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:05.420 Nvme0n1 00:30:05.420 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:05.420 [ 00:30:05.420 { 00:30:05.420 "name": "Nvme0n1", 00:30:05.420 "aliases": [ 00:30:05.420 "aa0828bf-6ad9-44a1-b8ed-4780a367559f" 00:30:05.420 ], 00:30:05.420 "product_name": "NVMe disk", 00:30:05.420 "block_size": 4096, 00:30:05.420 "num_blocks": 38912, 00:30:05.420 "uuid": "aa0828bf-6ad9-44a1-b8ed-4780a367559f", 00:30:05.420 "numa_id": 1, 00:30:05.420 "assigned_rate_limits": { 00:30:05.420 "rw_ios_per_sec": 0, 00:30:05.420 "rw_mbytes_per_sec": 0, 00:30:05.420 "r_mbytes_per_sec": 0, 00:30:05.420 "w_mbytes_per_sec": 0 00:30:05.420 }, 00:30:05.420 "claimed": false, 00:30:05.420 "zoned": false, 00:30:05.420 "supported_io_types": { 00:30:05.420 "read": true, 00:30:05.420 "write": true, 00:30:05.420 "unmap": true, 00:30:05.420 "flush": true, 00:30:05.420 "reset": true, 00:30:05.420 "nvme_admin": true, 00:30:05.420 "nvme_io": true, 00:30:05.420 "nvme_io_md": false, 00:30:05.420 "write_zeroes": true, 00:30:05.420 "zcopy": false, 00:30:05.420 "get_zone_info": false, 00:30:05.420 "zone_management": false, 00:30:05.420 "zone_append": false, 00:30:05.420 "compare": true, 00:30:05.420 "compare_and_write": true, 00:30:05.420 "abort": true, 00:30:05.420 "seek_hole": false, 00:30:05.420 "seek_data": false, 00:30:05.420 "copy": true, 
00:30:05.420 "nvme_iov_md": false 00:30:05.420 }, 00:30:05.420 "memory_domains": [ 00:30:05.420 { 00:30:05.420 "dma_device_id": "system", 00:30:05.420 "dma_device_type": 1 00:30:05.420 } 00:30:05.420 ], 00:30:05.420 "driver_specific": { 00:30:05.420 "nvme": [ 00:30:05.420 { 00:30:05.420 "trid": { 00:30:05.420 "trtype": "TCP", 00:30:05.420 "adrfam": "IPv4", 00:30:05.420 "traddr": "10.0.0.2", 00:30:05.420 "trsvcid": "4420", 00:30:05.420 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:05.420 }, 00:30:05.420 "ctrlr_data": { 00:30:05.420 "cntlid": 1, 00:30:05.420 "vendor_id": "0x8086", 00:30:05.420 "model_number": "SPDK bdev Controller", 00:30:05.420 "serial_number": "SPDK0", 00:30:05.420 "firmware_revision": "25.01", 00:30:05.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.420 "oacs": { 00:30:05.420 "security": 0, 00:30:05.420 "format": 0, 00:30:05.420 "firmware": 0, 00:30:05.420 "ns_manage": 0 00:30:05.420 }, 00:30:05.420 "multi_ctrlr": true, 00:30:05.420 "ana_reporting": false 00:30:05.420 }, 00:30:05.420 "vs": { 00:30:05.420 "nvme_version": "1.3" 00:30:05.420 }, 00:30:05.420 "ns_data": { 00:30:05.420 "id": 1, 00:30:05.420 "can_share": true 00:30:05.420 } 00:30:05.420 } 00:30:05.420 ], 00:30:05.420 "mp_policy": "active_passive" 00:30:05.420 } 00:30:05.420 } 00:30:05.420 ] 00:30:05.679 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=250727 00:30:05.679 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:05.679 04:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:05.679 Running I/O for 10 seconds... 
00:30:06.625 Latency(us)
00:30:06.625 [2024-12-10T03:17:05.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:06.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:06.625 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00
00:30:06.625 [2024-12-10T03:17:05.911Z] ===================================================================================================================
00:30:06.625 [2024-12-10T03:17:05.911Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00
00:30:06.625
00:30:07.560 04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:07.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:07.561 Nvme0n1 : 2.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00
00:30:07.561 [2024-12-10T03:17:06.847Z] ===================================================================================================================
00:30:07.561 [2024-12-10T03:17:06.847Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00
00:30:07.561
00:30:07.819 true
00:30:07.819 04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:07.819 04:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:30:07.819 04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:30:07.819 04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:30:07.819 04:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 250727
00:30:08.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:08.754 Nvme0n1 : 3.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00
00:30:08.754 [2024-12-10T03:17:08.040Z] ===================================================================================================================
00:30:08.754 [2024-12-10T03:17:08.040Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00
00:30:08.754
00:30:09.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:09.691 Nvme0n1 : 4.00 23145.75 90.41 0.00 0.00 0.00 0.00 0.00
00:30:09.691 [2024-12-10T03:17:08.977Z] ===================================================================================================================
00:30:09.691 [2024-12-10T03:17:08.977Z] Total : 23145.75 90.41 0.00 0.00 0.00 0.00 0.00
00:30:09.691
00:30:10.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:10.627 Nvme0n1 : 5.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:30:10.627 [2024-12-10T03:17:09.913Z] ===================================================================================================================
00:30:10.627 [2024-12-10T03:17:09.913Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:30:10.627
00:30:11.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:11.563 Nvme0n1 : 6.00 23219.83 90.70 0.00 0.00 0.00 0.00 0.00
00:30:11.563 [2024-12-10T03:17:10.849Z] ===================================================================================================================
00:30:11.563 [2024-12-10T03:17:10.849Z] Total : 23219.83 90.70 0.00 0.00 0.00 0.00 0.00
00:30:12.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:12.938 Nvme0n1 : 7.00 23277.29 90.93 0.00 0.00 0.00 0.00 0.00
00:30:12.939 [2024-12-10T03:17:12.224Z] ===================================================================================================================
00:30:12.939 [2024-12-10T03:17:12.224Z] Total : 23277.29 90.93 0.00 0.00 0.00 0.00 0.00
00:30:12.939
00:30:13.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:13.873 Nvme0n1 : 8.00 23322.50 91.10 0.00 0.00 0.00 0.00 0.00
00:30:13.873 [2024-12-10T03:17:13.159Z] ===================================================================================================================
00:30:13.873 [2024-12-10T03:17:13.159Z] Total : 23322.50 91.10 0.00 0.00 0.00 0.00 0.00
00:30:14.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:14.810 Nvme0n1 : 9.00 23369.89 91.29 0.00 0.00 0.00 0.00 0.00
00:30:14.810 [2024-12-10T03:17:14.096Z] ===================================================================================================================
00:30:14.810 [2024-12-10T03:17:14.096Z] Total : 23369.89 91.29 0.00 0.00 0.00 0.00 0.00
00:30:15.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:15.746 Nvme0n1 : 10.00 23395.10 91.39 0.00 0.00 0.00 0.00 0.00
00:30:15.746 [2024-12-10T03:17:15.032Z] ===================================================================================================================
00:30:15.746 [2024-12-10T03:17:15.032Z] Total : 23395.10 91.39 0.00 0.00 0.00 0.00 0.00
00:30:15.746
00:30:15.746
00:30:15.746 Latency(us)
00:30:15.746 [2024-12-10T03:17:15.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:15.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:15.746 Nvme0n1 : 10.00 23400.50 91.41 0.00 0.00 5466.79 3229.99 25964.74
00:30:15.746 [2024-12-10T03:17:15.032Z] ===================================================================================================================
00:30:15.746 [2024-12-10T03:17:15.032Z] Total : 23400.50 91.41 0.00 0.00 5466.79 3229.99 25964.74
00:30:15.746 {
00:30:15.746 "results": [
00:30:15.746 {
00:30:15.746 "job": "Nvme0n1",
00:30:15.746 "core_mask": "0x2",
00:30:15.746 "workload": "randwrite",
00:30:15.746 "status": "finished",
00:30:15.746 "queue_depth": 128,
00:30:15.746 "io_size": 4096,
00:30:15.746 "runtime": 10.003163,
00:30:15.746 "iops": 23400.49842234901,
00:30:15.746 "mibps": 91.40819696230082,
00:30:15.746 "io_failed": 0,
00:30:15.746 "io_timeout": 0,
00:30:15.746 "avg_latency_us": 5466.785024233781,
00:30:15.746 "min_latency_us": 3229.9885714285715,
00:30:15.746 "max_latency_us": 25964.73904761905
00:30:15.746 }
00:30:15.746 ],
00:30:15.746 "core_count": 1
00:30:15.746 }
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 250577
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 250577 ']'
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 250577
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 250577
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 250577'
00:30:15.746 killing process with pid 250577
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 250577
00:30:15.746 Received shutdown signal, test time was about 10.000000 seconds
00:30:15.746
00:30:15.746 Latency(us)
00:30:15.746 [2024-12-10T03:17:15.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:15.746 [2024-12-10T03:17:15.033Z] ===================================================================================================================
00:30:15.746 [2024-12-10T03:17:15.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:15.746 04:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 250577
00:30:16.005 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:16.005 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:16.263 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:16.263 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:30:16.521 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:30:16.521 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:30:16.521 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:30:16.780 [2024-12-10 04:17:15.822069] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:30:16.780 04:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:16.780 request:
00:30:16.780 {
00:30:16.780 "uuid": "62e83c6c-b54e-4bff-b7ae-b71e440d474a",
00:30:16.780 "method": "bdev_lvol_get_lvstores",
00:30:16.780 "req_id": 1
00:30:16.780 }
00:30:16.780 Got JSON-RPC error response
00:30:16.780 response:
00:30:16.780 {
00:30:16.780 "code": -19,
00:30:16.780 "message": "No such device"
00:30:16.780 }
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:30:17.039 aio_bdev
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev
aa0828bf-6ad9-44a1-b8ed-4780a367559f
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=aa0828bf-6ad9-44a1-b8ed-4780a367559f
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:17.039 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:30:17.298 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa0828bf-6ad9-44a1-b8ed-4780a367559f -t 2000
00:30:17.557 [
00:30:17.557 {
00:30:17.557 "name": "aa0828bf-6ad9-44a1-b8ed-4780a367559f",
00:30:17.557 "aliases": [
00:30:17.557 "lvs/lvol"
00:30:17.557 ],
00:30:17.557 "product_name": "Logical Volume",
00:30:17.557 "block_size": 4096,
00:30:17.557 "num_blocks": 38912,
00:30:17.557 "uuid": "aa0828bf-6ad9-44a1-b8ed-4780a367559f",
00:30:17.557 "assigned_rate_limits": {
00:30:17.557 "rw_ios_per_sec": 0,
00:30:17.557 "rw_mbytes_per_sec": 0,
00:30:17.557 "r_mbytes_per_sec": 0,
00:30:17.557 "w_mbytes_per_sec": 0
00:30:17.557 },
00:30:17.557 "claimed": false,
00:30:17.557 "zoned": false,
00:30:17.557 "supported_io_types": {
00:30:17.557 "read": true,
00:30:17.557 "write": true,
00:30:17.557 "unmap": true,
00:30:17.557 "flush": false,
00:30:17.557 "reset": true,
00:30:17.557 "nvme_admin": false,
00:30:17.557 "nvme_io": false,
00:30:17.557 "nvme_io_md": false,
00:30:17.557 "write_zeroes": true,
00:30:17.557 "zcopy": false,
00:30:17.557 "get_zone_info": false,
00:30:17.557 "zone_management": false,
00:30:17.557 "zone_append": false,
00:30:17.557 "compare": false,
00:30:17.557 "compare_and_write": false,
00:30:17.557 "abort": false,
00:30:17.557 "seek_hole": true,
00:30:17.557 "seek_data": true,
00:30:17.557 "copy": false,
00:30:17.557 "nvme_iov_md": false
00:30:17.557 },
00:30:17.557 "driver_specific": {
00:30:17.557 "lvol": {
00:30:17.557 "lvol_store_uuid": "62e83c6c-b54e-4bff-b7ae-b71e440d474a",
00:30:17.557 "base_bdev": "aio_bdev",
00:30:17.557 "thin_provision": false,
00:30:17.557 "num_allocated_clusters": 38,
00:30:17.557 "snapshot": false,
00:30:17.557 "clone": false,
00:30:17.557 "esnap_clone": false
00:30:17.557 }
00:30:17.557 }
00:30:17.557 }
00:30:17.557 ]
00:30:17.557 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:30:17.557 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a
00:30:17.557 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:30:17.817 04:17:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:17.817 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a 00:30:17.817 04:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:17.817 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:17.817 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa0828bf-6ad9-44a1-b8ed-4780a367559f 00:30:18.076 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62e83c6c-b54e-4bff-b7ae-b71e440d474a 00:30:18.334 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:18.334 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:18.593 00:30:18.593 real 0m15.730s 00:30:18.593 user 0m15.241s 00:30:18.593 sys 0m1.498s 00:30:18.593 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.593 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:18.593 ************************************ 00:30:18.593 END TEST lvs_grow_clean 00:30:18.594 ************************************ 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:18.594 ************************************ 00:30:18.594 START TEST lvs_grow_dirty 00:30:18.594 ************************************ 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:18.594 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:18.852 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:18.852 04:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:19.111 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 lvol 150 00:30:19.370 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:19.370 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:19.370 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:19.629 [2024-12-10 04:17:18.702028] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:19.629 [2024-12-10 04:17:18.702155] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:19.629 true 00:30:19.629 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:19.629 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:19.888 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:19.888 04:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:19.888 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:20.147 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.406 [2024-12-10 04:17:19.490467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=253228 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 253228 /var/tmp/bdevperf.sock 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 253228 ']' 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:20.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.406 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:20.666 [2024-12-10 04:17:19.722319] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:20.666 [2024-12-10 04:17:19.722369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253228 ] 00:30:20.666 [2024-12-10 04:17:19.796160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.666 [2024-12-10 04:17:19.836877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.666 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.666 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:20.666 04:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:21.234 Nvme0n1 00:30:21.234 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:21.234 [ 00:30:21.235 { 00:30:21.235 "name": "Nvme0n1", 00:30:21.235 "aliases": [ 00:30:21.235 "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3" 00:30:21.235 ], 00:30:21.235 "product_name": "NVMe disk", 00:30:21.235 "block_size": 4096, 00:30:21.235 "num_blocks": 38912, 00:30:21.235 "uuid": "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3", 00:30:21.235 "numa_id": 1, 00:30:21.235 "assigned_rate_limits": { 00:30:21.235 "rw_ios_per_sec": 0, 00:30:21.235 "rw_mbytes_per_sec": 0, 00:30:21.235 "r_mbytes_per_sec": 0, 00:30:21.235 "w_mbytes_per_sec": 0 00:30:21.235 }, 00:30:21.235 "claimed": false, 00:30:21.235 "zoned": false, 00:30:21.235 "supported_io_types": { 00:30:21.235 "read": true, 00:30:21.235 "write": true, 00:30:21.235 "unmap": true, 00:30:21.235 "flush": true, 00:30:21.235 "reset": true, 00:30:21.235 "nvme_admin": true, 00:30:21.235 "nvme_io": true, 00:30:21.235 "nvme_io_md": false, 00:30:21.235 "write_zeroes": true, 00:30:21.235 "zcopy": false, 00:30:21.235 "get_zone_info": false, 00:30:21.235 "zone_management": false, 00:30:21.235 "zone_append": false, 00:30:21.235 "compare": true, 00:30:21.235 "compare_and_write": true, 00:30:21.235 "abort": true, 00:30:21.235 "seek_hole": false, 00:30:21.235 "seek_data": false, 00:30:21.235 "copy": true, 00:30:21.235 "nvme_iov_md": false 00:30:21.235 }, 00:30:21.235 "memory_domains": [ 00:30:21.235 { 00:30:21.235 "dma_device_id": "system", 00:30:21.235 "dma_device_type": 1 00:30:21.235 } 00:30:21.235 ], 00:30:21.235 "driver_specific": { 00:30:21.235 "nvme": [ 00:30:21.235 { 00:30:21.235 "trid": { 00:30:21.235 "trtype": "TCP", 00:30:21.235 "adrfam": "IPv4", 00:30:21.235 "traddr": "10.0.0.2", 00:30:21.235 "trsvcid": "4420", 00:30:21.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:21.235 }, 00:30:21.235 "ctrlr_data": { 
00:30:21.235 "cntlid": 1, 00:30:21.235 "vendor_id": "0x8086", 00:30:21.235 "model_number": "SPDK bdev Controller", 00:30:21.235 "serial_number": "SPDK0", 00:30:21.235 "firmware_revision": "25.01", 00:30:21.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.235 "oacs": { 00:30:21.235 "security": 0, 00:30:21.235 "format": 0, 00:30:21.235 "firmware": 0, 00:30:21.235 "ns_manage": 0 00:30:21.235 }, 00:30:21.235 "multi_ctrlr": true, 00:30:21.235 "ana_reporting": false 00:30:21.235 }, 00:30:21.235 "vs": { 00:30:21.235 "nvme_version": "1.3" 00:30:21.235 }, 00:30:21.235 "ns_data": { 00:30:21.235 "id": 1, 00:30:21.235 "can_share": true 00:30:21.235 } 00:30:21.235 } 00:30:21.235 ], 00:30:21.235 "mp_policy": "active_passive" 00:30:21.235 } 00:30:21.235 } 00:30:21.235 ] 00:30:21.235 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=253244 00:30:21.235 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:21.235 04:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:21.494 Running I/O for 10 seconds... 00:30:22.430 Latency(us) 00:30:22.430 [2024-12-10T03:17:21.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.430 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:22.430 [2024-12-10T03:17:21.716Z] =================================================================================================================== 00:30:22.430 [2024-12-10T03:17:21.716Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:22.430 00:30:23.366 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:23.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.366 Nvme0n1 : 2.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:23.366 [2024-12-10T03:17:22.652Z] =================================================================================================================== 00:30:23.366 [2024-12-10T03:17:22.652Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:23.366 00:30:23.366 true 00:30:23.366 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:23.366 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:23.625 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:23.625 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:23.625 04:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 253244 00:30:24.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.563 Nvme0n1 : 3.00 
23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:24.563 [2024-12-10T03:17:23.849Z] =================================================================================================================== 00:30:24.563 [2024-12-10T03:17:23.849Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:24.563 00:30:25.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.499 Nvme0n1 : 4.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:25.499 [2024-12-10T03:17:24.785Z] =================================================================================================================== 00:30:25.499 [2024-12-10T03:17:24.785Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:25.499 00:30:26.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.433 Nvme0n1 : 5.00 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:30:26.433 [2024-12-10T03:17:25.719Z] =================================================================================================================== 00:30:26.433 [2024-12-10T03:17:25.719Z] Total : 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:30:26.433 00:30:27.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.369 Nvme0n1 : 6.00 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:30:27.369 [2024-12-10T03:17:26.655Z] =================================================================================================================== 00:30:27.369 [2024-12-10T03:17:26.655Z] Total : 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:30:27.369 00:30:28.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.305 Nvme0n1 : 7.00 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:30:28.305 [2024-12-10T03:17:27.591Z] =================================================================================================================== 00:30:28.305 [2024-12-10T03:17:27.591Z] Total : 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:30:28.305 00:30:29.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.682 Nvme0n1 : 8.00 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:30:29.682 [2024-12-10T03:17:28.968Z] =================================================================================================================== 00:30:29.682 [2024-12-10T03:17:28.968Z] Total : 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:30:29.682 00:30:30.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.617 Nvme0n1 : 9.00 23491.78 91.76 0.00 0.00 0.00 0.00 0.00 00:30:30.617 [2024-12-10T03:17:29.903Z] =================================================================================================================== 00:30:30.617 [2024-12-10T03:17:29.903Z] Total : 23491.78 91.76 0.00 0.00 0.00 0.00 0.00 00:30:30.617 00:30:31.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.550 Nvme0n1 : 10.00 23463.70 91.66 0.00 0.00 0.00 0.00 0.00 00:30:31.550 [2024-12-10T03:17:30.836Z] =================================================================================================================== 00:30:31.550 [2024-12-10T03:17:30.836Z] Total : 23463.70 91.66 0.00 0.00 0.00 0.00 0.00 00:30:31.550 00:30:31.550 00:30:31.550 Latency(us) 00:30:31.550 [2024-12-10T03:17:30.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.550 Nvme0n1 : 10.00 23468.79 91.67 0.00 0.00 5451.02 3245.59 26713.72 00:30:31.550 
[2024-12-10T03:17:30.836Z] =================================================================================================================== 00:30:31.550 [2024-12-10T03:17:30.836Z] Total : 23468.79 91.67 0.00 0.00 5451.02 3245.59 26713.72 00:30:31.550 { 00:30:31.550 "results": [ 00:30:31.550 { 00:30:31.550 "job": "Nvme0n1", 00:30:31.550 "core_mask": "0x2", 00:30:31.550 "workload": "randwrite", 00:30:31.550 "status": "finished", 00:30:31.550 "queue_depth": 128, 00:30:31.550 "io_size": 4096, 00:30:31.550 "runtime": 10.003284, 00:30:31.550 "iops": 23468.792848428577, 00:30:31.550 "mibps": 91.67497206417413, 00:30:31.550 "io_failed": 0, 00:30:31.550 "io_timeout": 0, 00:30:31.550 "avg_latency_us": 5451.021652201341, 00:30:31.550 "min_latency_us": 3245.592380952381, 00:30:31.550 "max_latency_us": 26713.721904761904 00:30:31.550 } 00:30:31.550 ], 00:30:31.550 "core_count": 1 00:30:31.550 } 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 253228 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 253228 ']' 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 253228 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253228 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253228' 00:30:31.550 killing process with pid 253228 00:30:31.550 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 253228 00:30:31.550 Received shutdown signal, test time was about 10.000000 seconds 00:30:31.550 00:30:31.550 Latency(us) 00:30:31.550 [2024-12-10T03:17:30.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.550 [2024-12-10T03:17:30.837Z] =================================================================================================================== 00:30:31.551 [2024-12-10T03:17:30.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.551 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 253228 00:30:31.551 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:31.809 04:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:32.068 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:32.068 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 250211 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 250211 00:30:32.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 250211 Killed "${NVMF_APP[@]}" "$@" 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=255042 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 255042 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 255042 ']' 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.327 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.327 [2024-12-10 04:17:31.489969] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.327 [2024-12-10 04:17:31.490851] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:32.327 [2024-12-10 04:17:31.490885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.327 [2024-12-10 04:17:31.567509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.327 [2024-12-10 04:17:31.606907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.327 [2024-12-10 04:17:31.606940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.327 [2024-12-10 04:17:31.606947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.327 [2024-12-10 04:17:31.606953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.327 [2024-12-10 04:17:31.606958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.327 [2024-12-10 04:17:31.607410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.586 [2024-12-10 04:17:31.673770] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.586 [2024-12-10 04:17:31.673985] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
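The restart traced above is the crux of the dirty-recovery case: the original target (pid 250211) is SIGKILLed so the lvstore never gets a clean shutdown, then a fresh nvmf_tgt is started in the same namespace in interrupt mode and must re-import the store. A condensed sketch of that pattern, assuming $nvmfpid, $SPDK_BIN_DIR and $rootdir are set as elsewhere in these scripts (the readiness loop is an approximation of what waitforlisten does, not a copy of it):

  kill -9 "$nvmfpid"            # no graceful shutdown, so the lvstore is left dirty
  wait "$nvmfpid" || true       # reap the killed target; a non-zero status is expected here
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # poll the RPC socket instead of sleeping a fixed time
  until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done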
00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.586 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.587 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:32.845 [2024-12-10 04:17:31.908784] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:32.846 [2024-12-10 04:17:31.908993] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:32.846 [2024-12-10 04:17:31.909078] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:32.846 04:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:33.105 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 -t 2000 00:30:33.105 [ 00:30:33.105 { 00:30:33.105 "name": "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3", 00:30:33.105 "aliases": [ 00:30:33.105 "lvs/lvol" 00:30:33.105 ], 00:30:33.105 "product_name": "Logical Volume", 00:30:33.105 "block_size": 4096, 00:30:33.105 "num_blocks": 38912, 00:30:33.105 "uuid": "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3", 00:30:33.105 "assigned_rate_limits": { 00:30:33.105 "rw_ios_per_sec": 0, 00:30:33.105 "rw_mbytes_per_sec": 0, 00:30:33.105 
"r_mbytes_per_sec": 0, 00:30:33.105 "w_mbytes_per_sec": 0 00:30:33.105 }, 00:30:33.105 "claimed": false, 00:30:33.105 "zoned": false, 00:30:33.105 "supported_io_types": { 00:30:33.105 "read": true, 00:30:33.105 "write": true, 00:30:33.105 "unmap": true, 00:30:33.105 "flush": false, 00:30:33.105 "reset": true, 00:30:33.105 "nvme_admin": false, 00:30:33.105 "nvme_io": false, 00:30:33.105 "nvme_io_md": false, 00:30:33.105 "write_zeroes": true, 00:30:33.105 "zcopy": false, 00:30:33.105 "get_zone_info": false, 00:30:33.105 "zone_management": false, 00:30:33.105 "zone_append": false, 00:30:33.105 "compare": false, 00:30:33.105 "compare_and_write": false, 00:30:33.105 "abort": false, 00:30:33.105 "seek_hole": true, 00:30:33.105 "seek_data": true, 00:30:33.105 "copy": false, 00:30:33.105 "nvme_iov_md": false 00:30:33.105 }, 00:30:33.105 "driver_specific": { 00:30:33.105 "lvol": { 00:30:33.105 "lvol_store_uuid": "41d4a3b7-8685-4022-b38b-4e713dba6f62", 00:30:33.105 "base_bdev": "aio_bdev", 00:30:33.105 "thin_provision": false, 00:30:33.105 "num_allocated_clusters": 38, 00:30:33.105 "snapshot": false, 00:30:33.105 "clone": false, 00:30:33.105 "esnap_clone": false 00:30:33.105 } 00:30:33.105 } 00:30:33.105 } 00:30:33.105 ] 00:30:33.105 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:33.105 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:33.105 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:33.364 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:33.364 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:33.364 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:33.623 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:33.623 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:33.623 [2024-12-10 04:17:32.891894] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:33.882 04:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:33.882 04:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:33.882 request: 00:30:33.882 { 00:30:33.882 "uuid": "41d4a3b7-8685-4022-b38b-4e713dba6f62", 00:30:33.882 "method": "bdev_lvol_get_lvstores", 00:30:33.882 "req_id": 1 00:30:33.882 } 00:30:33.882 Got JSON-RPC error response 00:30:33.882 response: 00:30:33.882 { 00:30:33.882 "code": -19, 00:30:33.882 "message": "No such device" 00:30:33.882 } 00:30:33.882 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:33.882 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.882 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.882 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.882 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:34.141 aio_bdev 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:34.141 04:17:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:34.141 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:34.400 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 -t 2000 00:30:34.400 [ 00:30:34.400 { 00:30:34.400 "name": "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3", 00:30:34.400 "aliases": [ 00:30:34.400 "lvs/lvol" 00:30:34.400 ], 00:30:34.400 "product_name": "Logical Volume", 00:30:34.400 "block_size": 4096, 00:30:34.400 "num_blocks": 38912, 00:30:34.400 "uuid": "24c6c3bc-8352-4266-aec5-2ddbf7ad19a3", 00:30:34.400 "assigned_rate_limits": { 00:30:34.400 "rw_ios_per_sec": 0, 00:30:34.400 "rw_mbytes_per_sec": 0, 00:30:34.400 "r_mbytes_per_sec": 0, 00:30:34.400 "w_mbytes_per_sec": 0 00:30:34.400 }, 00:30:34.400 "claimed": false, 00:30:34.400 "zoned": false, 00:30:34.400 "supported_io_types": { 00:30:34.400 "read": true, 00:30:34.400 "write": true, 00:30:34.400 "unmap": true, 00:30:34.400 "flush": false, 00:30:34.400 "reset": true, 00:30:34.400 "nvme_admin": false, 00:30:34.400 "nvme_io": false, 00:30:34.400 "nvme_io_md": false, 00:30:34.400 "write_zeroes": true, 00:30:34.400 "zcopy": false, 00:30:34.400 "get_zone_info": false, 00:30:34.400 "zone_management": false, 00:30:34.400 "zone_append": false, 00:30:34.400 "compare": false, 00:30:34.400 "compare_and_write": false, 00:30:34.400 "abort": false, 00:30:34.400 "seek_hole": true, 00:30:34.400 "seek_data": true, 00:30:34.400 "copy": false, 00:30:34.400 "nvme_iov_md": false 00:30:34.400 }, 00:30:34.400 "driver_specific": { 00:30:34.400 "lvol": { 00:30:34.400 "lvol_store_uuid": "41d4a3b7-8685-4022-b38b-4e713dba6f62", 00:30:34.400 "base_bdev": "aio_bdev", 00:30:34.400 "thin_provision": false, 00:30:34.400 "num_allocated_clusters": 38, 00:30:34.400 "snapshot": false, 00:30:34.400 "clone": false, 00:30:34.400 "esnap_clone": false 00:30:34.400 } 00:30:34.400 } 00:30:34.400 } 00:30:34.400 ] 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:34.659 04:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:34.918 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:34.918 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24c6c3bc-8352-4266-aec5-2ddbf7ad19a3 00:30:35.176 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41d4a3b7-8685-4022-b38b-4e713dba6f62 00:30:35.435 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:35.435 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:35.435 00:30:35.435 real 0m16.976s 00:30:35.435 user 0m34.452s 00:30:35.435 sys 0m3.788s 00:30:35.435 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.435 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:35.435 ************************************ 00:30:35.435 END TEST lvs_grow_dirty 00:30:35.435 ************************************ 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:35.694 nvmf_trace.0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:35.694 
04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.694 rmmod nvme_tcp 00:30:35.694 rmmod nvme_fabrics 00:30:35.694 rmmod nvme_keyring 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 255042 ']' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 255042 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 255042 ']' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 255042 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 255042 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.694 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.695 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 255042' 00:30:35.695 killing process with pid 255042 00:30:35.695 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 255042 00:30:35.695 04:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 255042 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.952 04:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.485 00:30:38.485 real 0m41.899s 00:30:38.485 user 0m52.303s 00:30:38.485 sys 0m10.066s 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.485 ************************************ 00:30:38.485 END TEST nvmf_lvs_grow 00:30:38.485 ************************************ 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.485 ************************************ 00:30:38.485 START TEST nvmf_bdev_io_wait 00:30:38.485 ************************************ 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:38.485 * Looking for test storage... 
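For reference, the pass condition of the dirty-recovery path that just finished is expressed purely as cluster counts read back over RPC, sampled before the SIGKILL and again after the store is re-imported. Condensed into one check (the UUID and expected counts are taken from the run above; $rpc stands in for the rpc.py invocation used throughout):

  lvs=41d4a3b7-8685-4022-b38b-4e713dba6f62
  free=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  data=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && data == 99 ))   # same values as before the kill, so recovery lost nothing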
00:30:38.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.485 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.486 --rc genhtml_branch_coverage=1 00:30:38.486 --rc genhtml_function_coverage=1 00:30:38.486 --rc genhtml_legend=1 00:30:38.486 --rc geninfo_all_blocks=1 00:30:38.486 --rc geninfo_unexecuted_blocks=1 00:30:38.486 00:30:38.486 ' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.486 --rc genhtml_branch_coverage=1 00:30:38.486 --rc genhtml_function_coverage=1 00:30:38.486 --rc genhtml_legend=1 00:30:38.486 --rc geninfo_all_blocks=1 00:30:38.486 --rc geninfo_unexecuted_blocks=1 00:30:38.486 00:30:38.486 ' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.486 --rc genhtml_branch_coverage=1 00:30:38.486 --rc genhtml_function_coverage=1 00:30:38.486 --rc genhtml_legend=1 00:30:38.486 --rc geninfo_all_blocks=1 00:30:38.486 --rc geninfo_unexecuted_blocks=1 00:30:38.486 00:30:38.486 ' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.486 --rc genhtml_branch_coverage=1 00:30:38.486 --rc genhtml_function_coverage=1 00:30:38.486 --rc genhtml_legend=1 00:30:38.486 --rc geninfo_all_blocks=1 00:30:38.486 --rc 
geninfo_unexecuted_blocks=1 00:30:38.486 00:30:38.486 ' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.486 04:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
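The device walk in the lines that follow resolves each supported NIC to its kernel netdev by globbing sysfs, which is the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion visible below. A stand-alone form of that idiom, with the PCI address taken from this run:

  pci=0000:af:00.0
  for d in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net device under $pci: ${d##*/}"   # prints cvl_0_0 on this rig
  done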
00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:43.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:43.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:43.844 Found net devices under 0000:af:00.0: cvl_0_0 00:30:43.844 
04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.844 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:43.845 Found net devices under 0000:af:00.1: cvl_0_1 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.845 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.226 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:30:44.227 00:30:44.227 --- 10.0.0.2 ping statistics --- 00:30:44.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.227 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:30:44.227 00:30:44.227 --- 10.0.0.1 ping statistics --- 00:30:44.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.227 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=259021 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 259021 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 259021 ']' 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
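The two pings above close out nvmftestinit: one physical port is moved into its own network namespace so initiator and target traffic crosses a real link between cvl_0_1 (10.0.0.1) on the host side and cvl_0_0 (10.0.0.2) inside the namespace. The plumbing, condensed from the commands in the trace (interface names and addresses are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> host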
00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.227 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.227 [2024-12-10 04:17:43.390560] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.227 [2024-12-10 04:17:43.391444] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:44.227 [2024-12-10 04:17:43.391474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.227 [2024-12-10 04:17:43.468177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.496 [2024-12-10 04:17:43.510794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.496 [2024-12-10 04:17:43.510829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.496 [2024-12-10 04:17:43.510836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.496 [2024-12-10 04:17:43.510843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.496 [2024-12-10 04:17:43.510848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.496 [2024-12-10 04:17:43.512159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.496 [2024-12-10 04:17:43.512266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.496 [2024-12-10 04:17:43.512299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.496 [2024-12-10 04:17:43.512300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.496 [2024-12-10 04:17:43.512734] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
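The namespace plumbing and target launch the harness just performed reduce to the pattern below. This is a minimal sketch for readers reproducing the topology by hand: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, port 4420, and the nvmf_tgt flags are taken from the log above; the relative paths and the backgrounding are illustrative.

    # One physical port moves into a network namespace and acts as the target;
    # the other stays in the root namespace and acts as the initiator.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root ns)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> root ns
    # The target runs inside the namespace, in interrupt mode, and defers
    # framework init until RPCs arrive (--wait-for-rpc):
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &

Prefixing NVMF_APP with the ip netns exec command (the NVMF_TARGET_NS_CMD array above) is what keeps every subsequent target invocation inside that namespace.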
00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 [2024-12-10 04:17:43.654805] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:44.496 [2024-12-10 04:17:43.655380] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:44.496 [2024-12-10 04:17:43.655820] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:44.496 [2024-12-10 04:17:43.655940] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
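Because the target was started with --wait-for-rpc, bdev options can still be changed before any subsystem initializes; the two rpc_cmd calls above map onto roughly the following rpc.py invocations (a sketch; /var/tmp/spdk.sock is the default RPC socket, and -p/-c set the bdev I/O pool and cache sizes used by this test):

    # Must run before framework_start_init; afterwards bdev options are frozen.
    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    # Kicks off subsystem initialization; the poll-group threads created here are
    # what the log shows switching to interrupt mode.
    ./scripts/rpc.py framework_start_init

The deliberately tiny pool (-p 5) and cache (-c 1) force bdev I/O requests to exhaust the pool and queue for retry, which is the behavior nvmf_bdev_io_wait exercises.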
00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 [2024-12-10 04:17:43.664968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 Malloc0 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.496 [2024-12-10 04:17:43.741477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=259160 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=259162 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.496 { 00:30:44.496 "params": { 00:30:44.496 "name": "Nvme$subsystem", 00:30:44.496 "trtype": "$TEST_TRANSPORT", 00:30:44.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.496 "adrfam": "ipv4", 00:30:44.496 "trsvcid": "$NVMF_PORT", 00:30:44.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.496 "hdgst": ${hdgst:-false}, 00:30:44.496 "ddgst": ${ddgst:-false} 00:30:44.496 }, 00:30:44.496 "method": "bdev_nvme_attach_controller" 00:30:44.496 } 00:30:44.496 EOF 00:30:44.496 )") 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=259165 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.496 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.496 { 00:30:44.496 "params": { 00:30:44.496 "name": "Nvme$subsystem", 00:30:44.496 "trtype": "$TEST_TRANSPORT", 00:30:44.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.496 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "$NVMF_PORT", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.497 "hdgst": ${hdgst:-false}, 00:30:44.497 "ddgst": ${ddgst:-false} 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 } 00:30:44.497 EOF 00:30:44.497 )") 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=259169 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.497 { 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme$subsystem", 00:30:44.497 "trtype": "$TEST_TRANSPORT", 00:30:44.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "$NVMF_PORT", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.497 "hdgst": ${hdgst:-false}, 00:30:44.497 "ddgst": ${ddgst:-false} 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 } 00:30:44.497 EOF 00:30:44.497 )") 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.497 { 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme$subsystem", 00:30:44.497 "trtype": "$TEST_TRANSPORT", 00:30:44.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "$NVMF_PORT", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.497 "hdgst": ${hdgst:-false}, 00:30:44.497 "ddgst": ${ddgst:-false} 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 } 00:30:44.497 EOF 00:30:44.497 )") 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 259160 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme1", 00:30:44.497 "trtype": "tcp", 00:30:44.497 "traddr": "10.0.0.2", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "4420", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.497 "hdgst": false, 00:30:44.497 "ddgst": false 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 }' 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme1", 00:30:44.497 "trtype": "tcp", 00:30:44.497 "traddr": "10.0.0.2", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "4420", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.497 "hdgst": false, 00:30:44.497 "ddgst": false 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 }' 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme1", 00:30:44.497 "trtype": "tcp", 00:30:44.497 "traddr": "10.0.0.2", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "4420", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.497 "hdgst": false, 00:30:44.497 "ddgst": false 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 }' 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.497 04:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.497 "params": { 00:30:44.497 "name": "Nvme1", 00:30:44.497 "trtype": "tcp", 00:30:44.497 "traddr": "10.0.0.2", 00:30:44.497 "adrfam": "ipv4", 00:30:44.497 "trsvcid": "4420", 00:30:44.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.497 "hdgst": false, 00:30:44.497 "ddgst": false 00:30:44.497 }, 00:30:44.497 "method": "bdev_nvme_attach_controller" 00:30:44.497 }' 00:30:44.756 [2024-12-10 04:17:43.794773] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:44.756 [2024-12-10 04:17:43.794830] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:44.756 [2024-12-10 04:17:43.796403] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:30:44.756 [2024-12-10 04:17:43.796450] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:44.756 [2024-12-10 04:17:43.797213] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:44.756 [2024-12-10 04:17:43.797259] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:44.756 [2024-12-10 04:17:43.799462] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:44.756 [2024-12-10 04:17:43.799517] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:44.756 [2024-12-10 04:17:43.990849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.756 [2024-12-10 04:17:44.035898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:45.015 [2024-12-10 04:17:44.083919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.015 [2024-12-10 04:17:44.129198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:45.015 [2024-12-10 04:17:44.184527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.015 [2024-12-10 04:17:44.237950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.015 [2024-12-10 04:17:44.243734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:45.015 [2024-12-10 04:17:44.279584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:45.274 Running I/O for 1 seconds... 00:30:45.274 Running I/O for 1 seconds... 00:30:45.274 Running I/O for 1 seconds... 00:30:45.532 Running I/O for 1 seconds... 
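At this point the target has been provisioned and all four initiator jobs are running; condensed, the sequence amounts to the sketch below. NQNs, sizes, addresses, and core masks are the ones from the log; bdevperf.json stands in for the /dev/fd/63 process substitution that gen_nvmf_target_json feeds each instance, and the wrapper object around the attach call reflects the shape of the harness's generated config.

    # Target side (inside the namespace): TCP transport, a 64 MiB / 512 B malloc
    # bdev, one subsystem with that namespace, and a listener on 10.0.0.2:4420.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: the generated JSON is one bdev_nvme_attach_controller call.
    cat > bdevperf.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # Four instances, one workload each, pinned to distinct cores and shm ids.
    ./build/examples/bdevperf -m 0x10 -i 1 --json bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256 &
    ./build/examples/bdevperf -m 0x20 -i 2 --json bdevperf.json -q 128 -o 4096 -w read  -t 1 -s 256 &
    ./build/examples/bdevperf -m 0x40 -i 3 --json bdevperf.json -q 128 -o 4096 -w flush -t 1 -s 256 &
    ./build/examples/bdevperf -m 0x80 -i 4 --json bdevperf.json -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait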
00:30:46.467 14576.00 IOPS, 56.94 MiB/s 00:30:46.467 Latency(us) 00:30:46.467 [2024-12-10T03:17:45.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.467 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:46.467 Nvme1n1 : 1.01 14638.00 57.18 0.00 0.00 8720.73 3464.05 10111.27 00:30:46.467 [2024-12-10T03:17:45.753Z] =================================================================================================================== 00:30:46.467 [2024-12-10T03:17:45.753Z] Total : 14638.00 57.18 0.00 0.00 8720.73 3464.05 10111.27 00:30:46.467 7041.00 IOPS, 27.50 MiB/s 00:30:46.467 Latency(us) 00:30:46.467 [2024-12-10T03:17:45.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.467 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:46.467 Nvme1n1 : 1.01 7093.72 27.71 0.00 0.00 17950.70 4525.10 26838.55 00:30:46.467 [2024-12-10T03:17:45.753Z] =================================================================================================================== 00:30:46.467 [2024-12-10T03:17:45.753Z] Total : 7093.72 27.71 0.00 0.00 17950.70 4525.10 26838.55 00:30:46.467 241952.00 IOPS, 945.12 MiB/s 00:30:46.467 Latency(us) 00:30:46.467 [2024-12-10T03:17:45.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.467 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:46.467 Nvme1n1 : 1.00 241587.95 943.70 0.00 0.00 527.40 222.35 1490.16 00:30:46.467 [2024-12-10T03:17:45.753Z] =================================================================================================================== 00:30:46.467 [2024-12-10T03:17:45.753Z] Total : 241587.95 943.70 0.00 0.00 527.40 222.35 1490.16 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 259162 00:30:46.467 7305.00 IOPS, 28.54 MiB/s 00:30:46.467 Latency(us) 00:30:46.467 [2024-12-10T03:17:45.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.467 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:46.467 Nvme1n1 : 1.01 7399.67 28.90 0.00 0.00 17252.01 4119.41 34952.53 00:30:46.467 [2024-12-10T03:17:45.753Z] =================================================================================================================== 00:30:46.467 [2024-12-10T03:17:45.753Z] Total : 7399.67 28.90 0.00 0.00 17252.01 4119.41 34952.53 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 259165 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 259169 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.467 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.467 rmmod nvme_tcp 00:30:46.726 rmmod nvme_fabrics 00:30:46.726 rmmod nvme_keyring 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 259021 ']' 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 259021 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 259021 ']' 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 259021 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259021 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259021' 00:30:46.726 killing process with pid 259021 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 259021 00:30:46.726 04:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 259021 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:46.985 
04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.985 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.986 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.986 04:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.890 00:30:48.890 real 0m10.837s 00:30:48.890 user 0m15.799s 00:30:48.890 sys 0m6.467s 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:48.890 ************************************ 00:30:48.890 END TEST nvmf_bdev_io_wait 00:30:48.890 ************************************ 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.890 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.150 ************************************ 00:30:49.150 START TEST nvmf_queue_depth 00:30:49.150 ************************************ 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:49.150 * Looking for test storage... 
00:30:49.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:49.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.150 --rc genhtml_branch_coverage=1 00:30:49.150 --rc genhtml_function_coverage=1 00:30:49.150 --rc genhtml_legend=1 00:30:49.150 --rc geninfo_all_blocks=1 00:30:49.150 --rc geninfo_unexecuted_blocks=1 00:30:49.150 00:30:49.150 ' 00:30:49.150 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:49.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.150 --rc genhtml_branch_coverage=1 00:30:49.150 --rc genhtml_function_coverage=1 00:30:49.150 --rc genhtml_legend=1 00:30:49.150 --rc geninfo_all_blocks=1 00:30:49.151 --rc geninfo_unexecuted_blocks=1 00:30:49.151 00:30:49.151 ' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.151 --rc genhtml_branch_coverage=1 00:30:49.151 --rc genhtml_function_coverage=1 00:30:49.151 --rc genhtml_legend=1 00:30:49.151 --rc geninfo_all_blocks=1 00:30:49.151 --rc geninfo_unexecuted_blocks=1 00:30:49.151 00:30:49.151 ' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:49.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.151 --rc genhtml_branch_coverage=1 00:30:49.151 --rc genhtml_function_coverage=1 00:30:49.151 --rc genhtml_legend=1 00:30:49.151 --rc geninfo_all_blocks=1 00:30:49.151 --rc 
geninfo_unexecuted_blocks=1 00:30:49.151 00:30:49.151 ' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.151 04:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.721 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.722 04:17:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:55.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:55.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:30:55.722 Found net devices under 0000:af:00.0: cvl_0_0 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:55.722 Found net devices under 0000:af:00.1: cvl_0_1 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.722 04:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:30:55.722 00:30:55.722 --- 10.0.0.2 ping statistics --- 00:30:55.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.722 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:55.722 00:30:55.722 --- 10.0.0.1 ping statistics --- 00:30:55.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.722 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.722 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=262975 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 262975 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 262975 ']' 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
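Condensed from the nvmf/common.sh trace above, the per-test network wiring amounts to the sketch below. Interface names, addresses, the namespace name and the nvmf_tgt flags are taken verbatim from the log; the harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment so the teardown's iptr helper can strip it again.

    # one e810 port moves into a private netns to play the target; its sibling
    # stays in the root namespace as the initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    # then the target app is launched inside the namespace, in interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &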
00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.723 04:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.723 [2024-12-10 04:17:54.283281] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:55.723 [2024-12-10 04:17:54.284132] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:30:55.723 [2024-12-10 04:17:54.284163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.723 [2024-12-10 04:17:54.365230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.723 [2024-12-10 04:17:54.405541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.723 [2024-12-10 04:17:54.405576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.723 [2024-12-10 04:17:54.405584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.723 [2024-12-10 04:17:54.405590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.723 [2024-12-10 04:17:54.405595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.723 [2024-12-10 04:17:54.406057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.723 [2024-12-10 04:17:54.473477] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:55.723 [2024-12-10 04:17:54.473683] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
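With the target up in interrupt mode, the rpc_cmd entries that follow configure it over /var/tmp/spdk.sock and then drive a 10-second verify workload at queue depth 1024 from bdevperf. rpc_cmd is the test harness's wrapper around scripts/rpc.py (multipath.sh further down sets rpc_py to exactly that script), so the sequence reduces to this sketch, every argument copied from the trace:

    # target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
    # subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf waits in -z mode, the remote controller is
    # attached over bdevperf's own RPC socket, then the run is kicked off
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The results further down settle around 12.6k IOPS at roughly 80 ms average latency, which is what Little's law predicts for this setup: 1024 outstanding I/Os / 12656 IOPS ≈ 81 ms, i.e. the latency is queueing, not the malloc bdev.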
00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 [2024-12-10 04:17:55.166724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 Malloc0 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.985 [2024-12-10 04:17:55.238924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=263114 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 263114 /var/tmp/bdevperf.sock 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 263114 ']' 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.985 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:56.243 [2024-12-10 04:17:55.293006] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:30:56.243 [2024-12-10 04:17:55.293061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263114 ] 00:30:56.243 [2024-12-10 04:17:55.368628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.243 [2024-12-10 04:17:55.409269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.243 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.243 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:56.243 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.243 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.243 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:56.501 NVMe0n1 00:30:56.501 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.501 04:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:56.759 Running I/O for 10 seconds... 00:30:58.629 12007.00 IOPS, 46.90 MiB/s [2024-12-10T03:17:58.851Z] 12269.00 IOPS, 47.93 MiB/s [2024-12-10T03:18:00.227Z] 12358.33 IOPS, 48.27 MiB/s [2024-12-10T03:18:01.163Z] 12512.00 IOPS, 48.88 MiB/s [2024-12-10T03:18:02.097Z] 12476.20 IOPS, 48.74 MiB/s [2024-12-10T03:18:03.033Z] 12485.83 IOPS, 48.77 MiB/s [2024-12-10T03:18:03.969Z] 12566.14 IOPS, 49.09 MiB/s [2024-12-10T03:18:04.905Z] 12584.75 IOPS, 49.16 MiB/s [2024-12-10T03:18:06.280Z] 12623.56 IOPS, 49.31 MiB/s [2024-12-10T03:18:06.280Z] 12614.70 IOPS, 49.28 MiB/s 00:31:06.994 Latency(us) 00:31:06.994 [2024-12-10T03:18:06.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:06.994 Verification LBA range: start 0x0 length 0x4000 00:31:06.994 NVMe0n1 : 10.05 12655.56 49.44 0.00 0.00 80641.37 7833.11 52428.80 00:31:06.994 [2024-12-10T03:18:06.280Z] =================================================================================================================== 00:31:06.994 [2024-12-10T03:18:06.280Z] Total : 12655.56 49.44 0.00 0.00 80641.37 7833.11 52428.80 00:31:06.994 { 00:31:06.994 "results": [ 00:31:06.994 { 00:31:06.994 "job": "NVMe0n1", 00:31:06.994 "core_mask": "0x1", 00:31:06.994 "workload": "verify", 00:31:06.994 "status": "finished", 00:31:06.994 "verify_range": { 00:31:06.994 "start": 0, 00:31:06.994 "length": 16384 00:31:06.994 }, 00:31:06.994 "queue_depth": 1024, 00:31:06.994 "io_size": 4096, 00:31:06.994 "runtime": 10.048625, 00:31:06.994 "iops": 12655.562328179229, 00:31:06.994 "mibps": 49.43579034445011, 00:31:06.994 "io_failed": 0, 00:31:06.994 "io_timeout": 0, 00:31:06.994 "avg_latency_us": 80641.37080956239, 00:31:06.994 "min_latency_us": 7833.112380952381, 00:31:06.994 "max_latency_us": 52428.8 00:31:06.994 } 00:31:06.994 ], 
00:31:06.994 "core_count": 1 00:31:06.994 } 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 263114 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 263114 ']' 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 263114 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263114 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263114' 00:31:06.994 killing process with pid 263114 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 263114 00:31:06.994 Received shutdown signal, test time was about 10.000000 seconds 00:31:06.994 00:31:06.994 Latency(us) 00:31:06.994 [2024-12-10T03:18:06.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.994 [2024-12-10T03:18:06.280Z] =================================================================================================================== 00:31:06.994 [2024-12-10T03:18:06.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.994 04:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 263114 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.994 rmmod nvme_tcp 00:31:06.994 rmmod nvme_fabrics 00:31:06.994 rmmod nvme_keyring 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:06.994 04:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 262975 ']' 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 262975 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 262975 ']' 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 262975 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262975 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262975' 00:31:06.994 killing process with pid 262975 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 262975 00:31:06.994 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 262975 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.253 04:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.789 00:31:09.789 real 0m20.333s 00:31:09.789 user 0m23.038s 00:31:09.789 sys 0m6.265s 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:09.789 ************************************ 00:31:09.789 END TEST nvmf_queue_depth 00:31:09.789 ************************************ 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.789 ************************************ 00:31:09.789 START TEST nvmf_target_multipath 00:31:09.789 ************************************ 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:09.789 * Looking for test storage... 00:31:09.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.789 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:09.790 04:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:09.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.790 --rc genhtml_branch_coverage=1 00:31:09.790 --rc genhtml_function_coverage=1 00:31:09.790 --rc genhtml_legend=1 00:31:09.790 --rc geninfo_all_blocks=1 00:31:09.790 --rc geninfo_unexecuted_blocks=1 00:31:09.790 00:31:09.790 ' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.790 --rc genhtml_branch_coverage=1 00:31:09.790 --rc genhtml_function_coverage=1 00:31:09.790 --rc genhtml_legend=1 00:31:09.790 --rc geninfo_all_blocks=1 00:31:09.790 --rc geninfo_unexecuted_blocks=1 00:31:09.790 00:31:09.790 ' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.790 --rc genhtml_branch_coverage=1 00:31:09.790 --rc genhtml_function_coverage=1 00:31:09.790 --rc genhtml_legend=1 00:31:09.790 --rc geninfo_all_blocks=1 00:31:09.790 --rc 
geninfo_unexecuted_blocks=1 00:31:09.790 00:31:09.790 ' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.790 --rc genhtml_branch_coverage=1 00:31:09.790 --rc genhtml_function_coverage=1 00:31:09.790 --rc genhtml_legend=1 00:31:09.790 --rc geninfo_all_blocks=1 00:31:09.790 --rc geninfo_unexecuted_blocks=1 00:31:09.790 00:31:09.790 ' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
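At this point common.sh has generated the initiator identity for the run: a random host NQN from nvme gen-hostnqn, the matching host ID, both packed into NVME_HOST, and NVME_CONNECT='nvme connect'. Assembled, a connect issued by this test would take roughly the shape below; the target address and subsystem NQN are illustrative only, borrowed from the queue-depth run above, since this multipath test exits before ever connecting:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562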
00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.790 04:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.790 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.791 04:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
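nvmftestinit must now enumerate the usable NICs before the multipath test can decide whether it is runnable at all. The long run of trace entries below is essentially this loop from nvmf/common.sh, with variable and array names as traced: build the list of supported e810/x722/mlx PCI functions, then glob each function's bound kernel netdev out of sysfs, keeping only interfaces that are up:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # the [[ up == up ]] checks in the trace filter on operstate
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done

On this host the loop again finds cvl_0_0 and cvl_0_1 under 0000:af:00.0/.1, both ports of the same e810 card, and wires them up exactly as in the queue-depth run. Because there is no second NIC, multipath.sh then prints 'only one NIC for nvmf test', tears the namespace back down, and exits 0, so the multipath test is effectively skipped on this rig.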
00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:16.360 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.361 04:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:16.361 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:16.361 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.361 04:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:16.361 Found net devices under 0000:af:00.0: cvl_0_0 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:16.361 Found net devices under 0000:af:00.1: cvl_0_1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:16.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:31:16.361 00:31:16.361 --- 10.0.0.2 ping statistics --- 00:31:16.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.361 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:31:16.361 00:31:16.361 --- 10.0.0.1 ping statistics --- 00:31:16.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.361 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.361 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:16.362 only one NIC for nvmf test 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.362 rmmod nvme_tcp 00:31:16.362 rmmod nvme_fabrics 00:31:16.362 rmmod nvme_keyring 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:16.362 04:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.362 04:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:17.740 04:18:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.740 00:31:17.740 real 0m8.263s 00:31:17.740 user 0m1.778s 00:31:17.740 sys 0m4.486s 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:17.740 ************************************ 00:31:17.740 END TEST nvmf_target_multipath 00:31:17.740 ************************************ 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.740 ************************************ 00:31:17.740 START TEST nvmf_zcopy 00:31:17.740 ************************************ 00:31:17.740 04:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:17.740 * Looking for test storage... 
00:31:17.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.740 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:17.740 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:17.740 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.999 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:18.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.000 --rc genhtml_branch_coverage=1 00:31:18.000 --rc genhtml_function_coverage=1 00:31:18.000 --rc genhtml_legend=1 00:31:18.000 --rc geninfo_all_blocks=1 00:31:18.000 --rc geninfo_unexecuted_blocks=1 00:31:18.000 00:31:18.000 ' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:18.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.000 --rc genhtml_branch_coverage=1 00:31:18.000 --rc genhtml_function_coverage=1 00:31:18.000 --rc genhtml_legend=1 00:31:18.000 --rc geninfo_all_blocks=1 00:31:18.000 --rc geninfo_unexecuted_blocks=1 00:31:18.000 00:31:18.000 ' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:18.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.000 --rc genhtml_branch_coverage=1 00:31:18.000 --rc genhtml_function_coverage=1 00:31:18.000 --rc genhtml_legend=1 00:31:18.000 --rc geninfo_all_blocks=1 00:31:18.000 --rc geninfo_unexecuted_blocks=1 00:31:18.000 00:31:18.000 ' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:18.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.000 --rc genhtml_branch_coverage=1 00:31:18.000 --rc genhtml_function_coverage=1 00:31:18.000 --rc genhtml_legend=1 00:31:18.000 --rc geninfo_all_blocks=1 00:31:18.000 --rc geninfo_unexecuted_blocks=1 00:31:18.000 00:31:18.000 ' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.000 04:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.000 04:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:24.568 04:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:24.568 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:24.569 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:24.569 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:24.569 Found net devices under 0000:af:00.0: cvl_0_0 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:24.569 Found net devices under 0000:af:00.1: cvl_0_1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:24.569 04:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:24.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:31:24.569 00:31:24.569 --- 10.0.0.2 ping statistics --- 00:31:24.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.569 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:31:24.569 00:31:24.569 --- 10.0.0.1 ping statistics --- 00:31:24.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.569 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.569 04:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=272210 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 272210 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 272210 ']' 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.569 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.569 [2024-12-10 04:18:23.055424] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:24.569 [2024-12-10 04:18:23.056385] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:31:24.569 [2024-12-10 04:18:23.056423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.569 [2024-12-10 04:18:23.133857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.569 [2024-12-10 04:18:23.174240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.569 [2024-12-10 04:18:23.174271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.570 [2024-12-10 04:18:23.174280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.570 [2024-12-10 04:18:23.174287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.570 [2024-12-10 04:18:23.174292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.570 [2024-12-10 04:18:23.174747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.570 [2024-12-10 04:18:23.241468] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:24.570 [2024-12-10 04:18:23.241661] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
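For readers reproducing this step outside the harness: the target bring-up above amounts to launching nvmf_tgt in interrupt mode inside the test namespace and then polling its RPC socket until it answers (the harness does this via its waitforlisten helper, visible at nvmf/common.sh@510). A minimal standalone sketch, assuming an SPDK build at $SPDK_DIR and the cvl_0_0_ns_spdk namespace configured as shown earlier; the polling loop is a hand-rolled stand-in for waitforlisten, not the harness code itself:

    #!/usr/bin/env bash
    # Launch the NVMe-oF target in interrupt mode inside the test namespace,
    # mirroring the nvmf/common.sh@508 invocation logged above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}  # assumption
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the RPC socket until the app responds.
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version \
                >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

Once the socket answers, the zcopy test issues the rpc_cmd calls seen next in the log: nvmf_create_transport -t tcp -o -c 0 --zcopy, subsystem and listener creation on 10.0.0.2:4420, then the malloc bdev and namespace attach.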
00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 [2024-12-10 04:18:23.319416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 [2024-12-10 04:18:23.347718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:24.570 04:18:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 malloc0 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:24.570 { 00:31:24.570 "params": { 00:31:24.570 "name": "Nvme$subsystem", 00:31:24.570 "trtype": "$TEST_TRANSPORT", 00:31:24.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.570 "adrfam": "ipv4", 00:31:24.570 "trsvcid": "$NVMF_PORT", 00:31:24.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.570 "hdgst": ${hdgst:-false}, 00:31:24.570 "ddgst": ${ddgst:-false} 00:31:24.570 }, 00:31:24.570 "method": "bdev_nvme_attach_controller" 00:31:24.570 } 00:31:24.570 EOF 00:31:24.570 )") 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:24.570 04:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:24.570 "params": { 00:31:24.570 "name": "Nvme1", 00:31:24.570 "trtype": "tcp", 00:31:24.570 "traddr": "10.0.0.2", 00:31:24.570 "adrfam": "ipv4", 00:31:24.570 "trsvcid": "4420", 00:31:24.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:24.570 "hdgst": false, 00:31:24.570 "ddgst": false 00:31:24.570 }, 00:31:24.570 "method": "bdev_nvme_attach_controller" 00:31:24.570 }' 00:31:24.570 [2024-12-10 04:18:23.446591] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:31:24.570 [2024-12-10 04:18:23.446646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272233 ] 00:31:24.570 [2024-12-10 04:18:23.520129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.570 [2024-12-10 04:18:23.560157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.570 Running I/O for 10 seconds... 00:31:26.883 8562.00 IOPS, 66.89 MiB/s [2024-12-10T03:18:27.105Z] 8609.00 IOPS, 67.26 MiB/s [2024-12-10T03:18:28.041Z] 8630.00 IOPS, 67.42 MiB/s [2024-12-10T03:18:28.978Z] 8635.50 IOPS, 67.46 MiB/s [2024-12-10T03:18:29.914Z] 8647.00 IOPS, 67.55 MiB/s [2024-12-10T03:18:30.851Z] 8647.67 IOPS, 67.56 MiB/s [2024-12-10T03:18:31.787Z] 8618.71 IOPS, 67.33 MiB/s [2024-12-10T03:18:33.164Z] 8625.38 IOPS, 67.39 MiB/s [2024-12-10T03:18:34.100Z] 8634.22 IOPS, 67.45 MiB/s [2024-12-10T03:18:34.101Z] 8642.10 IOPS, 67.52 MiB/s 00:31:34.815 Latency(us) 00:31:34.815 [2024-12-10T03:18:34.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.815 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:34.815 Verification LBA range: start 0x0 length 0x1000 00:31:34.815 Nvme1n1 : 10.01 8643.78 67.53 0.00 0.00 14765.32 1911.47 21096.35 00:31:34.815 [2024-12-10T03:18:34.101Z] =================================================================================================================== 00:31:34.815 [2024-12-10T03:18:34.101Z] Total : 8643.78 67.53 0.00 0.00 14765.32 1911.47 21096.35 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=273925 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.815 { 00:31:34.815 "params": { 00:31:34.815 "name": "Nvme$subsystem", 00:31:34.815 "trtype": "$TEST_TRANSPORT", 00:31:34.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.815 "adrfam": "ipv4", 00:31:34.815 "trsvcid": "$NVMF_PORT", 00:31:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.815 "hdgst": ${hdgst:-false}, 00:31:34.815 "ddgst": ${ddgst:-false} 00:31:34.815 }, 00:31:34.815 "method": "bdev_nvme_attach_controller" 00:31:34.815 } 00:31:34.815 EOF 00:31:34.815 )") 00:31:34.815 [2024-12-10 04:18:33.951077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:31:34.815 [2024-12-10 04:18:33.951109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:34.815 04:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:34.815 "params": { 00:31:34.815 "name": "Nvme1", 00:31:34.815 "trtype": "tcp", 00:31:34.815 "traddr": "10.0.0.2", 00:31:34.815 "adrfam": "ipv4", 00:31:34.815 "trsvcid": "4420", 00:31:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:34.815 "hdgst": false, 00:31:34.815 "ddgst": false 00:31:34.815 }, 00:31:34.815 "method": "bdev_nvme_attach_controller" 00:31:34.815 }' 00:31:34.815 [2024-12-10 04:18:33.963045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:33.963060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:33.975045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:33.975057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:33.987042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:33.987053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:33.992386] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:31:34.815 [2024-12-10 04:18:33.992429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273925 ] 00:31:34.815 [2024-12-10 04:18:33.999041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:33.999053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.011043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.011054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.023042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.023054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.035042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.035053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.047041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.047052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.059041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.059052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.066874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.815 [2024-12-10 04:18:34.071044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.071060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.083044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.083059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.815 [2024-12-10 04:18:34.095043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.815 [2024-12-10 04:18:34.095054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.107042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.107055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.107651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.074 [2024-12-10 04:18:34.119048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.119063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.131049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.131069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.143046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:31:35.074 [2024-12-10 04:18:34.143059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.155044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.155057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.167047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.167063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.179045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.179057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.191055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.191073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.203051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.203068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.215048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.215063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.227050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.227066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.239043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.239053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.251043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.251053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.263055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.263070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.275047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.275062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.287049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.287065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.074 [2024-12-10 04:18:34.299048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.074 [2024-12-10 04:18:34.299065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.075 Running I/O for 5 seconds... 
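The repeated subsystem.c/nvmf_rpc.c error pairs above and below are the expected outcome of re-issuing the namespace-attach RPC while NSID 1 is still occupied: spdk_nvmf_subsystem_add_ns_ext rejects the request and the RPC layer surfaces it as "Unable to add namespace". A one-line reproduction against the target configured above, using the same RPC the test harness invoked earlier (the rpc.py path is an assumption matching this workspace layout):

    # Fails with "Requested NSID 1 already in use": malloc0 was attached
    # as NSID 1 during setup, so a second attach to that NSID is rejected.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Each attempt is logged as a subsystem.c:2130 error immediately followed by the nvmf_rpc.c:1520 error, which is exactly the pairing that repeats for the remainder of this test window while bdevperf I/O runs.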
00:31:36.109 17002.00 IOPS, 132.83 MiB/s [2024-12-10T03:18:35.395Z]
00:31:37.145 16982.00 IOPS, 132.67 MiB/s [2024-12-10T03:18:36.431Z]
00:31:38.184 16932.67 IOPS, 132.29 MiB/s [2024-12-10T03:18:37.470Z]
00:31:39.031 [2024-12-10 04:18:38.194570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:39.031 [2024-12-10 04:18:38.194590]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.031 [2024-12-10 04:18:38.208860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.031 [2024-12-10 04:18:38.208880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.031 [2024-12-10 04:18:38.223309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.031 [2024-12-10 04:18:38.223329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.237297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.237317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.251482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.251500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.266739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.266758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.280883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.280912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.295496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.295515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.032 [2024-12-10 04:18:38.311009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.032 [2024-12-10 04:18:38.311029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.290 16927.25 IOPS, 132.24 MiB/s [2024-12-10T03:18:38.576Z] [2024-12-10 04:18:38.323638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.323657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.336868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.336886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.351211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.351230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.364491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.364510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.379173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.379192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.391948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.391967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 
04:18:38.407278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.407297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.419786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.419806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.432924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.432943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.447541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.447560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.458622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.458642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.472892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.472910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.487862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.487880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.503117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.503136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.515760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.515779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.530672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.530691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.541861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.541885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.556741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.556759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.291 [2024-12-10 04:18:38.571367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.291 [2024-12-10 04:18:38.571386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.585194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.585214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.600127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.600146] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.614851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.614870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.628486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.628504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.643211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.643230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.654370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.654389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.668919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.668938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.683417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.683436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.695640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.695659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.708130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.708148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.722865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.722884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.735696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.735714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.748770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.748788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.763466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.763485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.779124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.779142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.791606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.791625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.804983] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.805007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.550 [2024-12-10 04:18:38.819666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.550 [2024-12-10 04:18:38.819684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.808 [2024-12-10 04:18:38.834993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.808 [2024-12-10 04:18:38.835013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.808 [2024-12-10 04:18:38.848986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.808 [2024-12-10 04:18:38.849005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.808 [2024-12-10 04:18:38.863227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.863247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.876803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.876822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.891275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.891293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.903573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.903591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.917182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.917201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.931934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.931953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.946635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.946654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.960632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.960652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.975097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.975116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:38.986399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:38.986418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.000296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.000315] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.015410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.015429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.031092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.031111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.044365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.044384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.059562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.059580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.075005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.075023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.809 [2024-12-10 04:18:39.087949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.809 [2024-12-10 04:18:39.087968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.102867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.102886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.116650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.116669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.131306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.131324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.142444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.142463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.156840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.156858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.171302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.171321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.182742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.182761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.196650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:40.067 [2024-12-10 04:18:39.196669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.067 [2024-12-10 04:18:39.211408] 
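The pair above is the test driving a deliberate failure path: nvmf_subsystem_add_ns is reissued with NSID 1 while namespace 1 is still attached, so each attempt is rejected inside spdk_nvmf_subsystem_add_ns_ext and the failure surfaces through the RPC's paused-subsystem callback, nvmf_rpc_ns_paused. For reference, a minimal way to trigger the same error against a running SPDK target, assuming the subsystem and bdev names that appear later in this trace (an illustrative sketch, not part of the captured run):

    # Second add of the same NSID fails with "Requested NSID 1 already in use".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1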
00:31:40.067 16918.60 IOPS, 132.18 MiB/s
00:31:40.067 Latency(us)
00:31:40.067 [2024-12-10T03:18:39.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:40.067 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:40.067 Nvme1n1 : 5.00 16930.05 132.27 0.00 0.00 7555.01 2012.89 13107.20
00:31:40.067 [2024-12-10T03:18:39.353Z] ===================================================================================================================
00:31:40.067 [2024-12-10T03:18:39.353Z] Total : 16930.05 132.27 0.00 0.00 7555.01 2012.89 13107.20
[... after the run summary, the add_ns/ns_paused error pair recurs at ~12 ms intervals from 04:18:39.323048 through 04:18:39.479043 while the test winds down; occurrences elided ...]
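The summary table above is internally consistent for the 8192-byte I/O size named in the job line: throughput in MiB/s is IOPS times I/O size divided by 2^20. A quick arithmetic check using only the Total row's figures (a sanity check, not part of the captured run):

    # MiB/s = IOPS * io_size / 2^20
    echo 'scale=2; 16930.05 * 8192 / 1048576' | bc
    # prints 132.26; the table's 132.27 is the same value under different rounding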
00:31:40.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (273925) - No such process
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 273925
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
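bdev_delay_create above wraps malloc0 in a pass-through bdev that injects 1,000,000 us (1 s) of average and p99 latency on both reads and writes (-r/-t/-w/-n, in microseconds), which keeps plenty of commands in flight for the abort pass that follows. Standing alone, the step looks like this, with the same values as the traced rpc_cmd (rpc.py shown as the usual front end for these RPCs):

    # Wrap bdev "malloc0" in delay bdev "delay0": 1s avg and p99 read/write latency.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # delay0 is then exposed as NSID 1 (see the add_ns record just below).
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1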
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:40.326 delay0
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.326 04:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:40.326 [2024-12-10 04:18:39.587595] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:48.444 Initializing NVMe Controllers
00:31:48.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:48.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:48.444 Initialization complete. Launching workers.
00:31:48.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 24668
00:31:48.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24828, failed to submit 103
00:31:48.444 success 24723, unsuccessful 105, failed 0
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:48.444 rmmod nvme_tcp
00:31:48.444 rmmod nvme_fabrics
00:31:48.444 rmmod nvme_keyring
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 272210 ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 272210
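nvmfcleanup above unloads the kernel initiator modules with a retry loop: errexit is suspended and modprobe -v -r nvme-tcp is attempted inside for i in {1..20}, since the module can transiently stay busy while connections drain. Extracted from the trace, the pattern is roughly the following (the break and back-off details are assumptions; the trace records only the loop header and the modprobe calls):

    set +e                              # tolerate transient "module in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                         # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e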
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 272210 ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 272210
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 272210
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 272210'
00:31:48.444 killing process with pid 272210
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 272210
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 272210
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:48.444 04:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:49.822 04:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:49.822
00:31:49.822 real 0m32.089s
00:31:49.822 user 0m41.180s
00:31:49.822 sys 0m13.136s
00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:49.822 ************************************
00:31:49.822 END TEST nvmf_zcopy
00:31:49.822 ************************************
00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh
--transport=tcp --interrupt-mode 00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.822 ************************************ 00:31:49.822 START TEST nvmf_nmic 00:31:49.822 ************************************ 00:31:49.822 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:50.082 * Looking for test storage... 00:31:50.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:50.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.082 --rc genhtml_branch_coverage=1 00:31:50.082 --rc genhtml_function_coverage=1 00:31:50.082 --rc genhtml_legend=1 00:31:50.082 --rc geninfo_all_blocks=1 00:31:50.082 --rc geninfo_unexecuted_blocks=1 00:31:50.082 00:31:50.082 ' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:50.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.082 --rc genhtml_branch_coverage=1 00:31:50.082 --rc genhtml_function_coverage=1 00:31:50.082 --rc genhtml_legend=1 00:31:50.082 --rc geninfo_all_blocks=1 00:31:50.082 --rc geninfo_unexecuted_blocks=1 00:31:50.082 00:31:50.082 ' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:50.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.082 --rc genhtml_branch_coverage=1 00:31:50.082 --rc genhtml_function_coverage=1 00:31:50.082 --rc genhtml_legend=1 00:31:50.082 --rc geninfo_all_blocks=1 00:31:50.082 --rc geninfo_unexecuted_blocks=1 00:31:50.082 00:31:50.082 ' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:50.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.082 --rc genhtml_branch_coverage=1 00:31:50.082 --rc genhtml_function_coverage=1 00:31:50.082 --rc genhtml_legend=1 00:31:50.082 --rc geninfo_all_blocks=1 00:31:50.082 --rc geninfo_unexecuted_blocks=1 00:31:50.082 00:31:50.082 ' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated by earlier exports; duplicates elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:31:50.082 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:50.083 04:18:49
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.083 04:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.656 04:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.656 04:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.656 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.656 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.657 
04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.657 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
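The namespace plumbing just traced reduces to a short iproute2 sequence; condensed here with the names and addresses this run discovered (cvl_0_0 is the target-side port, cvl_0_1 the initiator side), including the link-up, firewall and ping steps that follow immediately below:

# Isolate the target port in its own netns so initiator-to-target traffic
# actually crosses the wire between the two E810 ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the firewall rule so teardown (iptr) can later remove exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: nvmf test rule'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1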
00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.657 04:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:31:56.657 00:31:56.657 --- 10.0.0.2 ping statistics --- 00:31:56.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.657 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:31:56.657 00:31:56.657 --- 10.0.0.1 ping statistics --- 00:31:56.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.657 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=279265 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 279265 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 279265 ']' 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.657 [2024-12-10 04:18:55.163006] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.657 [2024-12-10 04:18:55.163979] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:31:56.657 [2024-12-10 04:18:55.164016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.657 [2024-12-10 04:18:55.243011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.657 [2024-12-10 04:18:55.283527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.657 [2024-12-10 04:18:55.283565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.657 [2024-12-10 04:18:55.283573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.657 [2024-12-10 04:18:55.283581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.657 [2024-12-10 04:18:55.283586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.657 [2024-12-10 04:18:55.285045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.657 [2024-12-10 04:18:55.285149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.657 [2024-12-10 04:18:55.285258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.657 [2024-12-10 04:18:55.285258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.657 [2024-12-10 04:18:55.354498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.657 [2024-12-10 04:18:55.355277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:56.657 [2024-12-10 04:18:55.355561] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
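The target start just logged, stripped to its essentials: nvmf_tgt runs inside the target namespace, in interrupt mode, on a four-core mask (hence the four reactor notices), and the harness then blocks until the RPC socket answers. A sketch with waitforlisten replaced by a plain poll loop (the loop is illustrative; rpc_get_methods is a standard SPDK RPC):

ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the app is up, bailing out if it died.
until ./scripts/rpc.py rpc_get_methods &>/dev/null; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
  sleep 0.5
done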
00:31:56.657 [2024-12-10 04:18:55.355944] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:56.657 [2024-12-10 04:18:55.355982] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.657 [2024-12-10 04:18:55.434033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.657 Malloc0 00:31:56.657 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
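Collected from the rpc_cmd trace, the whole nmic provisioning step is five JSON-RPC calls against that socket (flags exactly as logged; rpc.py path as in this workspace):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # transport opts from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420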
00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 [2024-12-10 04:18:55.518346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:56.658 test case1: single bdev can't be used in multiple subsystems 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 [2024-12-10 04:18:55.549738] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:56.658 [2024-12-10 04:18:55.549761] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:56.658 [2024-12-10 04:18:55.549769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:56.658 request: 00:31:56.658 { 00:31:56.658 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:56.658 "namespace": { 00:31:56.658 "bdev_name": "Malloc0", 00:31:56.658 "no_auto_visible": false, 00:31:56.658 "hide_metadata": false 00:31:56.658 }, 00:31:56.658 "method": "nvmf_subsystem_add_ns", 00:31:56.658 "req_id": 1 00:31:56.658 } 00:31:56.658 Got JSON-RPC error response 00:31:56.658 response: 00:31:56.658 { 00:31:56.658 "code": -32602, 00:31:56.658 "message": "Invalid parameters" 00:31:56.658 } 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:56.658 04:18:55 
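Test case 1 above exercises bdev claiming: cnode1's namespace holds an exclusive_write claim on Malloc0, so adding the same bdev to cnode2 must fail, which is exactly the -32602 response in the log. The harness's pass/fail inversion, as a sketch:

nmic_status=0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if (( nmic_status == 0 )); then
  exit 1   # the add succeeding would mean the exclusive_write claim is broken
fi
echo ' Adding namespace failed - expected result.'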
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:56.658 Adding namespace failed - expected result. 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:56.658 test case2: host connect to nvmf target in multiple paths 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:56.658 [2024-12-10 04:18:55.561830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:56.658 04:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:56.917 04:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:56.917 04:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:56.917 04:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:56.917 04:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:56.917 04:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:58.818 04:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:58.818 [global] 00:31:58.818 thread=1 00:31:58.818 invalidate=1 
00:31:58.818 rw=write 00:31:58.818 time_based=1 00:31:58.818 runtime=1 00:31:58.818 ioengine=libaio 00:31:58.818 direct=1 00:31:58.818 bs=4096 00:31:58.818 iodepth=1 00:31:58.818 norandommap=0 00:31:58.818 numjobs=1 00:31:58.818 00:31:58.818 verify_dump=1 00:31:58.818 verify_backlog=512 00:31:58.818 verify_state_save=0 00:31:58.818 do_verify=1 00:31:58.818 verify=crc32c-intel 00:31:58.818 [job0] 00:31:58.818 filename=/dev/nvme0n1 00:31:59.076 Could not set queue depth (nvme0n1) 00:31:59.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:59.333 fio-3.35 00:31:59.333 Starting 1 thread 00:32:00.269 00:32:00.269 job0: (groupid=0, jobs=1): err= 0: pid=280029: Tue Dec 10 04:18:59 2024 00:32:00.269 read: IOPS=687, BW=2751KiB/s (2818kB/s)(2812KiB/1022msec) 00:32:00.269 slat (nsec): min=6793, max=25256, avg=8107.07, stdev=2449.47 00:32:00.269 clat (usec): min=189, max=41128, avg=1207.98, stdev=6264.52 00:32:00.269 lat (usec): min=196, max=41139, avg=1216.08, stdev=6266.61 00:32:00.269 clat percentiles (usec): 00:32:00.269 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:32:00.269 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 241], 00:32:00.269 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:32:00.269 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:00.269 | 99.99th=[41157] 00:32:00.269 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:32:00.269 slat (nsec): min=9501, max=37588, avg=10627.11, stdev=1539.54 00:32:00.269 clat (usec): min=129, max=320, avg=146.97, stdev=29.43 00:32:00.269 lat (usec): min=139, max=330, avg=157.60, stdev=29.64 00:32:00.269 clat percentiles (usec): 00:32:00.269 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:32:00.269 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:32:00.269 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 241], 00:32:00.269 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 322], 00:32:00.269 | 99.99th=[ 322] 00:32:00.269 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:32:00.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:00.269 lat (usec) : 250=93.86%, 500=5.15% 00:32:00.269 lat (msec) : 50=0.98% 00:32:00.269 cpu : usr=1.27%, sys=2.64%, ctx=1727, majf=0, minf=1 00:32:00.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.269 issued rwts: total=703,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:00.269 00:32:00.269 Run status group 0 (all jobs): 00:32:00.269 READ: bw=2751KiB/s (2818kB/s), 2751KiB/s-2751KiB/s (2818kB/s-2818kB/s), io=2812KiB (2879kB), run=1022-1022msec 00:32:00.269 WRITE: bw=4008KiB/s (4104kB/s), 4008KiB/s-4008KiB/s (4104kB/s-4104kB/s), io=4096KiB (4194kB), run=1022-1022msec 00:32:00.269 00:32:00.269 Disk stats (read/write): 00:32:00.269 nvme0n1: ios=750/1024, merge=0/0, ticks=732/141, in_queue=873, util=91.28% 00:32:00.269 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:00.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:00.528 04:18:59 
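The fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t write -r 1 -v, apparently mapping to block size, queue depth, workload, runtime and verify) generates the job shown in the log. Reassembled as a plain job file, it could be run by hand like this (file path hypothetical; contents copied from the trace):

cat > /tmp/nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-write.fio

One point the results make clear: although the job is write-only, fio reports a READ group too, because do_verify=1 reads the data back and checks the crc32c-intel checksums; note the heavy tail on that read side, with mean completion latency around 1.2 ms but a max near 41 ms.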
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:00.528 rmmod nvme_tcp 00:32:00.528 rmmod nvme_fabrics 00:32:00.528 rmmod nvme_keyring 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 279265 ']' 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 279265 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 279265 ']' 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 279265 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.528 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279265 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 279265' 00:32:00.787 killing process with pid 279265 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 279265 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 279265 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:00.787 04:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.787 04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.787 04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.787 04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.787 04:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:03.323 00:32:03.323 real 0m12.994s 00:32:03.323 user 0m24.169s 00:32:03.323 sys 0m5.972s 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.323 ************************************ 00:32:03.323 END TEST nvmf_nmic 00:32:03.323 ************************************ 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:03.323 ************************************ 00:32:03.323 START TEST nvmf_fio_target 00:32:03.323 ************************************ 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:03.323 * Looking for test storage... 
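Before following the next test: nvmftestfini's teardown for nmic, traced just above, mirrors its setup, i.e. disconnect the initiator, unload the kernel NVMe/TCP stack (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kill the target by pid, strip only the SPDK-tagged firewall rules, and remove the namespace. Condensed sketch:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops both paths (4420 and 4421)
modprobe -v -r nvme-tcp                            # cascades to nvme_fabrics, nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
# iptr: restore the ruleset minus anything carrying an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1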
00:32:03.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:03.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.323 --rc genhtml_branch_coverage=1 00:32:03.323 --rc genhtml_function_coverage=1 00:32:03.323 --rc genhtml_legend=1 00:32:03.323 --rc geninfo_all_blocks=1 00:32:03.323 --rc geninfo_unexecuted_blocks=1 00:32:03.323 00:32:03.323 ' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:03.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.323 --rc genhtml_branch_coverage=1 00:32:03.323 --rc genhtml_function_coverage=1 00:32:03.323 --rc genhtml_legend=1 00:32:03.323 --rc geninfo_all_blocks=1 00:32:03.323 --rc geninfo_unexecuted_blocks=1 00:32:03.323 00:32:03.323 ' 00:32:03.323 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:03.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.323 --rc genhtml_branch_coverage=1 00:32:03.323 --rc genhtml_function_coverage=1 00:32:03.323 --rc genhtml_legend=1 00:32:03.323 --rc geninfo_all_blocks=1 00:32:03.323 --rc geninfo_unexecuted_blocks=1 00:32:03.323 00:32:03.323 ' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:03.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.324 --rc genhtml_branch_coverage=1 00:32:03.324 --rc genhtml_function_coverage=1 00:32:03.324 --rc genhtml_legend=1 00:32:03.324 --rc geninfo_all_blocks=1 00:32:03.324 --rc geninfo_unexecuted_blocks=1 00:32:03.324 
00:32:03.324 ' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
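A side observation on the giant PATH lines above: paths/export.sh prepends the same three toolchain directories every time it is sourced, so PATH accumulates duplicates as test scripts nest. That is harmless, but a dedup pass is easy if ever wanted (illustrative helper, not harness code):

dedupe_path() {
  local -A seen=()
  local out='' rest=$PATH dir
  while [[ -n $rest ]]; do
    dir=${rest%%:*}
    [[ $rest == *:* ]] && rest=${rest#*:} || rest=''
    [[ -z $dir || -n ${seen[$dir]:-} ]] && continue
    seen[$dir]=1
    out+=${out:+:}$dir
  done
  PATH=$out
}
dedupe_path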
NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:03.324 04:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.893 04:19:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.893 04:19:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:09.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:09.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:09.893 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:09.893 Found net devices under 0000:af:00.1: cvl_0_1 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.893 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.894 04:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:32:09.894 00:32:09.894 --- 10.0.0.2 ping statistics --- 00:32:09.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.894 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:09.894 00:32:09.894 --- 10.0.0.1 ping statistics --- 00:32:09.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.894 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=283613 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 283613 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 283613 ']' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
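
Stripped of the xtrace noise, the nvmf_tcp_init plumbing above reduces to a short iproute2/iptables sequence; a condensed replay, with the interface and namespace names taken verbatim from the log:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and back

Putting one port of the dual-port e810 NIC into a namespace is what lets a single host act as both target (10.0.0.2) and initiator (10.0.0.1) over real hardware.
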
00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.894 [2024-12-10 04:19:08.261712] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.894 [2024-12-10 04:19:08.262637] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:32:09.894 [2024-12-10 04:19:08.262672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.894 [2024-12-10 04:19:08.341269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.894 [2024-12-10 04:19:08.383594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.894 [2024-12-10 04:19:08.383631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.894 [2024-12-10 04:19:08.383639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.894 [2024-12-10 04:19:08.383646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.894 [2024-12-10 04:19:08.383651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.894 [2024-12-10 04:19:08.384999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.894 [2024-12-10 04:19:08.385110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.894 [2024-12-10 04:19:08.385215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.894 [2024-12-10 04:19:08.385216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.894 [2024-12-10 04:19:08.454859] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.894 [2024-12-10 04:19:08.455760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:09.894 [2024-12-10 04:19:08.455954] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:09.894 [2024-12-10 04:19:08.456386] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.894 [2024-12-10 04:19:08.456408] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
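
waitforlisten then blocks until the freshly started target answers RPC on /var/tmp/spdk.sock; the trace shows it bounding itself at max_retries=100. A rough stand-in for that helper (the rpc_get_methods probe is this sketch's choice, not necessarily the harness's; rpc.py path as used throughout this log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    i=0
    # rpc_get_methods succeeds as soon as the app's RPC server is listening
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        (( ++i > 100 )) && { echo "nvmf_tgt never came up" >&2; exit 1; }
        sleep 0.5
    done

Note the RPC socket is a Unix-domain socket, so no "ip netns exec" is needed to reach the namespaced target.
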
00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.894 [2024-12-10 04:19:08.702008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:09.894 04:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.153 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:10.153 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.153 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:10.154 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.412 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:10.412 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:10.671 04:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.929 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:10.929 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:11.188 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:11.188 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:11.188 04:19:10 
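
Condensed, the target/fio.sh@19-48 configuration above and just below is one linear RPC sequence — a transport, seven 64 MB/512 B malloc bdevs, a raid0 and a concat bdev built from five of them, a subsystem with four namespaces and one TCP listener — followed by the host-side connect and a wait for the namespaces to surface. The loop form here is a condensation of the script's one-call-at-a-time trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for _ in $(seq 1 7); do $RPC bdev_malloc_create 64 512; done   # -> Malloc0..Malloc6
    $RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: block until all 4 namespaces show up as block devices
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do
        sleep 2
    done

The serial SPDKISFASTANDAWESOME set on the subsystem is what lsblk matches, which is why the counter lands on nvme_devices=4 in the trace below.
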
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:11.188 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:11.446 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:11.705 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:11.705 04:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:11.963 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:11.963 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:11.963 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.221 [2024-12-10 04:19:11.393931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.221 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:12.480 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:12.738 04:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:12.997 04:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:14.901 04:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:14.901 [global] 00:32:14.901 thread=1 00:32:14.901 invalidate=1 00:32:14.901 rw=write 00:32:14.901 time_based=1 00:32:14.901 runtime=1 00:32:14.901 ioengine=libaio 00:32:14.901 direct=1 00:32:14.901 bs=4096 00:32:14.901 iodepth=1 00:32:14.901 norandommap=0 00:32:14.901 numjobs=1 00:32:14.901 00:32:14.901 verify_dump=1 00:32:14.901 verify_backlog=512 00:32:14.901 verify_state_save=0 00:32:14.901 do_verify=1 00:32:14.901 verify=crc32c-intel 00:32:14.901 [job0] 00:32:14.901 filename=/dev/nvme0n1 00:32:14.901 [job1] 00:32:14.901 filename=/dev/nvme0n2 00:32:14.901 [job2] 00:32:14.901 filename=/dev/nvme0n3 00:32:14.901 [job3] 00:32:14.901 filename=/dev/nvme0n4 00:32:15.169 Could not set queue depth (nvme0n1) 00:32:15.169 Could not set queue depth (nvme0n2) 00:32:15.169 Could not set queue depth (nvme0n3) 00:32:15.169 Could not set queue depth (nvme0n4) 00:32:15.429 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:15.429 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:15.429 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:15.429 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:15.429 fio-3.35 00:32:15.429 Starting 4 threads 00:32:16.800 00:32:16.800 job0: (groupid=0, jobs=1): err= 0: pid=284861: Tue Dec 10 04:19:15 2024 00:32:16.800 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:32:16.800 slat (nsec): min=9639, max=22666, avg=13928.91, stdev=3851.12 00:32:16.800 clat (usec): min=40656, max=41088, avg=40965.55, stdev=90.63 00:32:16.800 lat (usec): min=40665, max=41099, avg=40979.48, stdev=91.04 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:16.800 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:16.800 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.800 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:16.800 | 99.99th=[41157] 00:32:16.800 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:32:16.800 slat (nsec): min=10014, max=39735, avg=14401.79, stdev=4760.92 00:32:16.800 clat (usec): min=135, max=430, avg=214.57, stdev=41.29 00:32:16.800 lat (usec): min=145, max=442, avg=228.97, stdev=42.77 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 165], 00:32:16.800 | 30.00th=[ 184], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:32:16.800 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 269], 00:32:16.800 | 
99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 433], 99.95th=[ 433], 00:32:16.800 | 99.99th=[ 433] 00:32:16.800 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:32:16.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:16.800 lat (usec) : 250=80.90%, 500=14.98% 00:32:16.800 lat (msec) : 50=4.12% 00:32:16.800 cpu : usr=0.20%, sys=1.08%, ctx=537, majf=0, minf=1 00:32:16.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.800 job1: (groupid=0, jobs=1): err= 0: pid=284862: Tue Dec 10 04:19:15 2024 00:32:16.800 read: IOPS=2440, BW=9762KiB/s (9997kB/s)(9772KiB/1001msec) 00:32:16.800 slat (nsec): min=6756, max=44387, avg=7831.02, stdev=1682.23 00:32:16.800 clat (usec): min=173, max=4105, avg=227.72, stdev=83.68 00:32:16.800 lat (usec): min=186, max=4111, avg=235.55, stdev=83.68 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:32:16.800 | 30.00th=[ 198], 40.00th=[ 225], 50.00th=[ 243], 60.00th=[ 245], 00:32:16.800 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:32:16.800 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 441], 99.95th=[ 478], 00:32:16.800 | 99.99th=[ 4113] 00:32:16.800 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:16.800 slat (nsec): min=9468, max=40382, avg=10891.11, stdev=1795.57 00:32:16.800 clat (usec): min=117, max=443, avg=149.47, stdev=33.17 00:32:16.800 lat (usec): min=131, max=454, avg=160.37, stdev=33.73 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:32:16.800 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:32:16.800 | 70.00th=[ 141], 80.00th=[ 157], 90.00th=[ 208], 95.00th=[ 231], 00:32:16.800 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 330], 00:32:16.800 | 99.99th=[ 445] 00:32:16.800 bw ( KiB/s): min=10576, max=10576, per=67.00%, avg=10576.00, stdev= 0.00, samples=1 00:32:16.800 iops : min= 2644, max= 2644, avg=2644.00, stdev= 0.00, samples=1 00:32:16.800 lat (usec) : 250=92.02%, 500=7.96% 00:32:16.800 lat (msec) : 10=0.02% 00:32:16.800 cpu : usr=2.80%, sys=8.70%, ctx=5003, majf=0, minf=2 00:32:16.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 issued rwts: total=2443,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.800 job2: (groupid=0, jobs=1): err= 0: pid=284863: Tue Dec 10 04:19:15 2024 00:32:16.800 read: IOPS=24, BW=96.3KiB/s (98.7kB/s)(100KiB/1038msec) 00:32:16.800 slat (nsec): min=8367, max=27542, avg=22705.84, stdev=4501.61 00:32:16.800 clat (usec): min=339, max=41323, avg=37716.32, stdev=11244.08 00:32:16.800 lat (usec): min=348, max=41332, avg=37739.03, stdev=11245.27 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 338], 5.00th=[ 379], 10.00th=[40633], 20.00th=[41157], 00:32:16.800 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:32:16.800 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.800 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:16.800 | 99.99th=[41157] 00:32:16.800 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:32:16.800 slat (nsec): min=9659, max=36712, avg=11678.96, stdev=2087.32 00:32:16.800 clat (usec): min=139, max=275, avg=169.06, stdev=12.57 00:32:16.800 lat (usec): min=153, max=311, avg=180.74, stdev=13.01 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:32:16.800 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:32:16.800 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 190], 00:32:16.800 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 277], 99.95th=[ 277], 00:32:16.800 | 99.99th=[ 277] 00:32:16.800 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:32:16.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:16.800 lat (usec) : 250=95.16%, 500=0.56% 00:32:16.800 lat (msec) : 50=4.28% 00:32:16.800 cpu : usr=0.29%, sys=0.58%, ctx=538, majf=0, minf=1 00:32:16.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.800 job3: (groupid=0, jobs=1): err= 0: pid=284864: Tue Dec 10 04:19:15 2024 00:32:16.800 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:32:16.800 slat (nsec): min=10908, max=25863, avg=22826.86, stdev=2765.56 00:32:16.800 clat (usec): min=40769, max=41130, avg=40960.30, stdev=73.92 00:32:16.800 lat (usec): min=40779, max=41153, avg=40983.13, stdev=75.46 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:16.800 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:16.800 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:16.800 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:16.800 | 99.99th=[41157] 00:32:16.800 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:32:16.800 slat (nsec): min=10631, max=37708, avg=12236.56, stdev=2194.41 00:32:16.800 clat (usec): min=154, max=323, avg=179.59, stdev=13.77 00:32:16.800 lat (usec): min=165, max=361, avg=191.83, stdev=14.69 00:32:16.800 clat percentiles (usec): 00:32:16.800 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:32:16.800 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:32:16.800 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:32:16.800 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 326], 99.95th=[ 326], 00:32:16.800 | 99.99th=[ 326] 00:32:16.800 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:32:16.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:16.800 lat (usec) : 250=95.51%, 500=0.37% 00:32:16.800 lat (msec) : 50=4.12% 00:32:16.800 cpu : usr=0.60%, sys=0.80%, ctx=536, majf=0, minf=1 00:32:16.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:16.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.800 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.800 00:32:16.800 Run status group 0 (all jobs): 00:32:16.800 READ: bw=9680KiB/s (9912kB/s), 86.2KiB/s-9762KiB/s (88.3kB/s-9997kB/s), io=9.81MiB (10.3MB), run=1001-1038msec 00:32:16.800 WRITE: bw=15.4MiB/s (16.2MB/s), 1973KiB/s-9.99MiB/s (2020kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1038msec 00:32:16.800 00:32:16.800 Disk stats (read/write): 00:32:16.800 nvme0n1: ios=41/512, merge=0/0, ticks=1559/104, in_queue=1663, util=85.97% 00:32:16.800 nvme0n2: ios=2098/2266, merge=0/0, ticks=505/310, in_queue=815, util=91.07% 00:32:16.800 nvme0n3: ios=43/512, merge=0/0, ticks=1642/85, in_queue=1727, util=93.56% 00:32:16.800 nvme0n4: ios=40/512, merge=0/0, ticks=1641/84, in_queue=1725, util=94.23% 00:32:16.800 04:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:16.800 [global] 00:32:16.800 thread=1 00:32:16.801 invalidate=1 00:32:16.801 rw=randwrite 00:32:16.801 time_based=1 00:32:16.801 runtime=1 00:32:16.801 ioengine=libaio 00:32:16.801 direct=1 00:32:16.801 bs=4096 00:32:16.801 iodepth=1 00:32:16.801 norandommap=0 00:32:16.801 numjobs=1 00:32:16.801 00:32:16.801 verify_dump=1 00:32:16.801 verify_backlog=512 00:32:16.801 verify_state_save=0 00:32:16.801 do_verify=1 00:32:16.801 verify=crc32c-intel 00:32:16.801 [job0] 00:32:16.801 filename=/dev/nvme0n1 00:32:16.801 [job1] 00:32:16.801 filename=/dev/nvme0n2 00:32:16.801 [job2] 00:32:16.801 filename=/dev/nvme0n3 00:32:16.801 [job3] 00:32:16.801 filename=/dev/nvme0n4 00:32:16.801 Could not set queue depth (nvme0n1) 00:32:16.801 Could not set queue depth (nvme0n2) 00:32:16.801 Could not set queue depth (nvme0n3) 00:32:16.801 Could not set queue depth (nvme0n4) 00:32:16.801 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.801 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.801 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.801 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.801 fio-3.35 00:32:16.801 Starting 4 threads 00:32:18.170 00:32:18.170 job0: (groupid=0, jobs=1): err= 0: pid=285225: Tue Dec 10 04:19:17 2024 00:32:18.170 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:32:18.170 slat (nsec): min=9667, max=22005, avg=15773.82, stdev=4629.44 00:32:18.170 clat (usec): min=40776, max=41061, avg=40973.57, stdev=51.38 00:32:18.170 lat (usec): min=40786, max=41072, avg=40989.34, stdev=51.16 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:18.170 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:18.170 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:18.170 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:18.170 | 99.99th=[41157] 00:32:18.170 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:18.170 slat (nsec): min=9438, max=63454, avg=11005.94, stdev=2978.54 00:32:18.170 clat 
(usec): min=139, max=325, avg=195.26, stdev=21.91 00:32:18.170 lat (usec): min=150, max=371, avg=206.27, stdev=22.56 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:32:18.170 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:32:18.170 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:32:18.170 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 326], 99.95th=[ 326], 00:32:18.170 | 99.99th=[ 326] 00:32:18.170 bw ( KiB/s): min= 4087, max= 4087, per=33.66%, avg=4087.00, stdev= 0.00, samples=1 00:32:18.170 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:32:18.170 lat (usec) : 250=94.38%, 500=1.50% 00:32:18.170 lat (msec) : 50=4.12% 00:32:18.170 cpu : usr=0.50%, sys=0.69%, ctx=535, majf=0, minf=1 00:32:18.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:18.170 job1: (groupid=0, jobs=1): err= 0: pid=285226: Tue Dec 10 04:19:17 2024 00:32:18.170 read: IOPS=40, BW=163KiB/s (166kB/s)(164KiB/1009msec) 00:32:18.170 slat (nsec): min=7188, max=24654, avg=16310.56, stdev=6845.02 00:32:18.170 clat (usec): min=193, max=41204, avg=22046.80, stdev=20499.62 00:32:18.170 lat (usec): min=204, max=41212, avg=22063.12, stdev=20499.97 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 231], 00:32:18.170 | 30.00th=[ 285], 40.00th=[ 330], 50.00th=[40633], 60.00th=[40633], 00:32:18.170 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:18.170 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:18.170 | 99.99th=[41157] 00:32:18.170 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:18.170 slat (nsec): min=9498, max=45241, avg=10977.83, stdev=2234.80 00:32:18.170 clat (usec): min=142, max=359, avg=189.30, stdev=24.36 00:32:18.170 lat (usec): min=152, max=404, avg=200.27, stdev=24.63 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 172], 00:32:18.170 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:32:18.170 | 70.00th=[ 194], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 229], 00:32:18.170 | 99.00th=[ 255], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 359], 00:32:18.170 | 99.99th=[ 359] 00:32:18.170 bw ( KiB/s): min= 4096, max= 4096, per=33.73%, avg=4096.00, stdev= 0.00, samples=1 00:32:18.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:18.170 lat (usec) : 250=93.49%, 500=2.53% 00:32:18.170 lat (msec) : 50=3.98% 00:32:18.170 cpu : usr=0.20%, sys=0.99%, ctx=554, majf=0, minf=1 00:32:18.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:18.170 job2: (groupid=0, jobs=1): err= 0: pid=285227: Tue Dec 10 04:19:17 2024 00:32:18.170 read: IOPS=1024, BW=4099KiB/s (4197kB/s)(4148KiB/1012msec) 
00:32:18.170 slat (nsec): min=7208, max=26441, avg=8261.86, stdev=1874.48 00:32:18.170 clat (usec): min=185, max=41225, avg=710.37, stdev=4540.41 00:32:18.170 lat (usec): min=193, max=41236, avg=718.64, stdev=4541.83 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 192], 00:32:18.170 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 196], 60.00th=[ 198], 00:32:18.170 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 219], 00:32:18.170 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:18.170 | 99.99th=[41157] 00:32:18.170 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:32:18.170 slat (nsec): min=10218, max=42861, avg=11613.34, stdev=2149.51 00:32:18.170 clat (usec): min=123, max=309, avg=156.90, stdev=24.69 00:32:18.170 lat (usec): min=143, max=327, avg=168.51, stdev=25.18 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 139], 00:32:18.170 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 151], 00:32:18.170 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 194], 00:32:18.170 | 99.00th=[ 251], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 310], 00:32:18.170 | 99.99th=[ 310] 00:32:18.170 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:32:18.170 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:32:18.170 lat (usec) : 250=98.33%, 500=1.17% 00:32:18.170 lat (msec) : 50=0.51% 00:32:18.170 cpu : usr=2.47%, sys=3.66%, ctx=2575, majf=0, minf=1 00:32:18.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:18.170 job3: (groupid=0, jobs=1): err= 0: pid=285229: Tue Dec 10 04:19:17 2024 00:32:18.170 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:32:18.170 slat (nsec): min=9516, max=24468, avg=22085.59, stdev=2925.67 00:32:18.170 clat (usec): min=40530, max=41102, avg=40949.17, stdev=116.43 00:32:18.170 lat (usec): min=40540, max=41123, avg=40971.25, stdev=118.40 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:18.170 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:18.170 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:18.170 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:18.170 | 99.99th=[41157] 00:32:18.170 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:32:18.170 slat (nsec): min=10476, max=45949, avg=12043.49, stdev=2359.45 00:32:18.170 clat (usec): min=147, max=905, avg=198.16, stdev=56.19 00:32:18.170 lat (usec): min=158, max=917, avg=210.20, stdev=56.36 00:32:18.170 clat percentiles (usec): 00:32:18.170 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:32:18.170 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 196], 00:32:18.170 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 237], 00:32:18.170 | 99.00th=[ 289], 99.50th=[ 816], 99.90th=[ 906], 99.95th=[ 906], 00:32:18.170 | 99.99th=[ 906] 00:32:18.170 bw ( KiB/s): min= 4096, max= 4096, per=33.73%, avg=4096.00, stdev= 
0.00, samples=1 00:32:18.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:18.170 lat (usec) : 250=93.45%, 500=1.87%, 1000=0.56% 00:32:18.170 lat (msec) : 50=4.12% 00:32:18.170 cpu : usr=0.20%, sys=1.19%, ctx=535, majf=0, minf=1 00:32:18.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.170 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:18.170 00:32:18.170 Run status group 0 (all jobs): 00:32:18.170 READ: bw=4435KiB/s (4541kB/s), 87.0KiB/s-4099KiB/s (89.1kB/s-4197kB/s), io=4488KiB (4596kB), run=1009-1012msec 00:32:18.170 WRITE: bw=11.9MiB/s (12.4MB/s), 2026KiB/s-6071KiB/s (2074kB/s-6217kB/s), io=12.0MiB (12.6MB), run=1009-1012msec 00:32:18.170 00:32:18.170 Disk stats (read/write): 00:32:18.170 nvme0n1: ios=68/512, merge=0/0, ticks=751/91, in_queue=842, util=86.87% 00:32:18.170 nvme0n2: ios=58/512, merge=0/0, ticks=1725/97, in_queue=1822, util=98.48% 00:32:18.170 nvme0n3: ios=1057/1536, merge=0/0, ticks=1543/232, in_queue=1775, util=98.54% 00:32:18.170 nvme0n4: ios=43/512, merge=0/0, ticks=1723/93, in_queue=1816, util=98.43% 00:32:18.170 04:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:18.170 [global] 00:32:18.170 thread=1 00:32:18.170 invalidate=1 00:32:18.170 rw=write 00:32:18.170 time_based=1 00:32:18.170 runtime=1 00:32:18.170 ioengine=libaio 00:32:18.170 direct=1 00:32:18.170 bs=4096 00:32:18.170 iodepth=128 00:32:18.170 norandommap=0 00:32:18.170 numjobs=1 00:32:18.170 00:32:18.170 verify_dump=1 00:32:18.170 verify_backlog=512 00:32:18.170 verify_state_save=0 00:32:18.170 do_verify=1 00:32:18.170 verify=crc32c-intel 00:32:18.170 [job0] 00:32:18.170 filename=/dev/nvme0n1 00:32:18.170 [job1] 00:32:18.170 filename=/dev/nvme0n2 00:32:18.170 [job2] 00:32:18.170 filename=/dev/nvme0n3 00:32:18.170 [job3] 00:32:18.170 filename=/dev/nvme0n4 00:32:18.170 Could not set queue depth (nvme0n1) 00:32:18.170 Could not set queue depth (nvme0n2) 00:32:18.170 Could not set queue depth (nvme0n3) 00:32:18.170 Could not set queue depth (nvme0n4) 00:32:18.428 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:18.428 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:18.428 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:18.428 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:18.428 fio-3.35 00:32:18.428 Starting 4 threads 00:32:19.801 00:32:19.801 job0: (groupid=0, jobs=1): err= 0: pid=285594: Tue Dec 10 04:19:18 2024 00:32:19.801 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:32:19.801 slat (nsec): min=1574, max=32916k, avg=301971.75, stdev=2028952.34 00:32:19.801 clat (usec): min=7890, max=92576, avg=38889.91, stdev=18636.63 00:32:19.801 lat (usec): min=7897, max=92640, avg=39191.88, stdev=18734.32 00:32:19.801 clat percentiles (usec): 00:32:19.801 | 1.00th=[ 7963], 5.00th=[10159], 10.00th=[10421], 20.00th=[22676], 00:32:19.801 | 30.00th=[27132], 40.00th=[32113], 
50.00th=[39584], 60.00th=[44303], 00:32:19.801 | 70.00th=[49021], 80.00th=[55837], 90.00th=[63701], 95.00th=[73925], 00:32:19.801 | 99.00th=[73925], 99.50th=[79168], 99.90th=[79168], 99.95th=[89654], 00:32:19.801 | 99.99th=[92799] 00:32:19.801 write: IOPS=2197, BW=8791KiB/s (9002kB/s)(8888KiB/1011msec); 0 zone resets 00:32:19.801 slat (usec): min=2, max=18684, avg=168.33, stdev=1065.16 00:32:19.801 clat (usec): min=3045, max=50944, avg=21782.72, stdev=12167.65 00:32:19.801 lat (usec): min=4495, max=52046, avg=21951.05, stdev=12232.48 00:32:19.801 clat percentiles (usec): 00:32:19.801 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5276], 20.00th=[ 8094], 00:32:19.801 | 30.00th=[13304], 40.00th=[17433], 50.00th=[22414], 60.00th=[26346], 00:32:19.801 | 70.00th=[28443], 80.00th=[33424], 90.00th=[40109], 95.00th=[40633], 00:32:19.801 | 99.00th=[43254], 99.50th=[44827], 99.90th=[44827], 99.95th=[46400], 00:32:19.801 | 99.99th=[51119] 00:32:19.801 bw ( KiB/s): min= 7648, max= 9104, per=13.00%, avg=8376.00, stdev=1029.55, samples=2 00:32:19.801 iops : min= 1912, max= 2276, avg=2094.00, stdev=257.39, samples=2 00:32:19.801 lat (msec) : 4=0.02%, 10=14.85%, 20=17.31%, 50=54.38%, 100=13.44% 00:32:19.801 cpu : usr=1.19%, sys=3.47%, ctx=227, majf=0, minf=1 00:32:19.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:32:19.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.801 issued rwts: total=2048,2222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.801 job1: (groupid=0, jobs=1): err= 0: pid=285595: Tue Dec 10 04:19:18 2024 00:32:19.801 read: IOPS=3964, BW=15.5MiB/s (16.2MB/s)(15.7MiB/1012msec) 00:32:19.801 slat (nsec): min=1128, max=26820k, avg=124008.99, stdev=1155093.25 00:32:19.801 clat (usec): min=1628, max=67742, avg=16413.46, stdev=12298.40 00:32:19.801 lat (usec): min=3776, max=67764, avg=16537.47, stdev=12393.33 00:32:19.801 clat percentiles (usec): 00:32:19.801 | 1.00th=[ 3785], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8586], 00:32:19.801 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[13042], 00:32:19.801 | 70.00th=[15270], 80.00th=[22938], 90.00th=[36439], 95.00th=[44303], 00:32:19.801 | 99.00th=[55313], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:32:19.802 | 99.99th=[67634] 00:32:19.802 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:32:19.802 slat (usec): min=2, max=22950, avg=114.61, stdev=987.33 00:32:19.802 clat (usec): min=3058, max=64864, avg=14503.56, stdev=11140.61 00:32:19.802 lat (usec): min=3066, max=64871, avg=14618.17, stdev=11227.11 00:32:19.802 clat percentiles (usec): 00:32:19.802 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7832], 00:32:19.802 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10683], 00:32:19.802 | 70.00th=[13435], 80.00th=[19268], 90.00th=[28705], 95.00th=[39584], 00:32:19.802 | 99.00th=[62129], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:32:19.802 | 99.99th=[64750] 00:32:19.802 bw ( KiB/s): min=12288, max=20480, per=25.44%, avg=16384.00, stdev=5792.62, samples=2 00:32:19.802 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:32:19.802 lat (msec) : 2=0.01%, 4=0.73%, 10=45.08%, 20=34.27%, 50=17.35% 00:32:19.802 lat (msec) : 100=2.55% 00:32:19.802 cpu : usr=3.26%, sys=5.44%, ctx=224, majf=0, minf=1 00:32:19.802 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.802 issued rwts: total=4012,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.802 job2: (groupid=0, jobs=1): err= 0: pid=285596: Tue Dec 10 04:19:18 2024 00:32:19.802 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:32:19.802 slat (nsec): min=1115, max=14174k, avg=101428.68, stdev=797346.99 00:32:19.802 clat (usec): min=4441, max=51346, avg=14366.49, stdev=4549.79 00:32:19.802 lat (usec): min=4444, max=51347, avg=14467.92, stdev=4596.91 00:32:19.802 clat percentiles (usec): 00:32:19.802 | 1.00th=[ 4621], 5.00th=[ 7832], 10.00th=[ 9241], 20.00th=[11338], 00:32:19.802 | 30.00th=[12125], 40.00th=[13042], 50.00th=[14353], 60.00th=[14877], 00:32:19.802 | 70.00th=[15795], 80.00th=[16909], 90.00th=[19792], 95.00th=[21890], 00:32:19.802 | 99.00th=[28967], 99.50th=[34341], 99.90th=[45351], 99.95th=[45351], 00:32:19.802 | 99.99th=[51119] 00:32:19.802 write: IOPS=4578, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1011msec); 0 zone resets 00:32:19.802 slat (usec): min=2, max=21747, avg=102.18, stdev=867.10 00:32:19.802 clat (usec): min=2024, max=42014, avg=12747.80, stdev=4632.74 00:32:19.802 lat (usec): min=2832, max=42044, avg=12849.98, stdev=4685.69 00:32:19.802 clat percentiles (usec): 00:32:19.802 | 1.00th=[ 6521], 5.00th=[ 7963], 10.00th=[ 9372], 20.00th=[10290], 00:32:19.802 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12518], 00:32:19.802 | 70.00th=[12911], 80.00th=[13960], 90.00th=[16319], 95.00th=[19792], 00:32:19.802 | 99.00th=[33424], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:19.802 | 99.99th=[42206] 00:32:19.802 bw ( KiB/s): min=17656, max=19208, per=28.61%, avg=18432.00, stdev=1097.43, samples=2 00:32:19.802 iops : min= 4414, max= 4802, avg=4608.00, stdev=274.36, samples=2 00:32:19.802 lat (msec) : 4=0.10%, 10=14.87%, 20=78.91%, 50=6.11%, 100=0.01% 00:32:19.802 cpu : usr=3.66%, sys=5.25%, ctx=276, majf=0, minf=1 00:32:19.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.802 issued rwts: total=4608,4629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.802 job3: (groupid=0, jobs=1): err= 0: pid=285597: Tue Dec 10 04:19:18 2024 00:32:19.802 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:32:19.802 slat (nsec): min=1189, max=14511k, avg=78443.22, stdev=687813.69 00:32:19.802 clat (usec): min=2499, max=37715, avg=12233.20, stdev=4724.22 00:32:19.802 lat (usec): min=2506, max=37720, avg=12311.65, stdev=4780.38 00:32:19.802 clat percentiles (usec): 00:32:19.802 | 1.00th=[ 3720], 5.00th=[ 5407], 10.00th=[ 7308], 20.00th=[ 8291], 00:32:19.802 | 30.00th=[ 9110], 40.00th=[10421], 50.00th=[11207], 60.00th=[12780], 00:32:19.802 | 70.00th=[14222], 80.00th=[16450], 90.00th=[18744], 95.00th=[20579], 00:32:19.802 | 99.00th=[25822], 99.50th=[27132], 99.90th=[37487], 99.95th=[37487], 00:32:19.802 | 99.99th=[37487] 00:32:19.802 write: IOPS=5302, BW=20.7MiB/s (21.7MB/s)(20.9MiB/1009msec); 0 zone resets 00:32:19.802 slat (nsec): min=1969, max=21755k, avg=77995.62, stdev=662693.04 00:32:19.802 
clat (usec): min=1066, max=40717, avg=11737.98, stdev=6505.32 00:32:19.802 lat (usec): min=1075, max=40721, avg=11815.98, stdev=6545.92 00:32:19.802 clat percentiles (usec): 00:32:19.802 | 1.00th=[ 2573], 5.00th=[ 4228], 10.00th=[ 5932], 20.00th=[ 7898], 00:32:19.802 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[11076], 00:32:19.802 | 70.00th=[13173], 80.00th=[14484], 90.00th=[19006], 95.00th=[22938], 00:32:19.802 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:32:19.802 | 99.99th=[40633] 00:32:19.802 bw ( KiB/s): min=20472, max=21312, per=32.43%, avg=20892.00, stdev=593.97, samples=2 00:32:19.802 iops : min= 5118, max= 5328, avg=5223.00, stdev=148.49, samples=2 00:32:19.802 lat (msec) : 2=0.24%, 4=2.86%, 10=40.74%, 20=48.40%, 50=7.77% 00:32:19.802 cpu : usr=3.77%, sys=5.75%, ctx=337, majf=0, minf=1 00:32:19.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.802 issued rwts: total=5120,5350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.802 00:32:19.802 Run status group 0 (all jobs): 00:32:19.802 READ: bw=60.9MiB/s (63.9MB/s), 8103KiB/s-19.8MiB/s (8297kB/s-20.8MB/s), io=61.7MiB (64.7MB), run=1009-1012msec 00:32:19.802 WRITE: bw=62.9MiB/s (66.0MB/s), 8791KiB/s-20.7MiB/s (9002kB/s-21.7MB/s), io=63.7MiB (66.8MB), run=1009-1012msec 00:32:19.802 00:32:19.802 Disk stats (read/write): 00:32:19.802 nvme0n1: ios=1761/2048, merge=0/0, ticks=19522/11734, in_queue=31256, util=96.29% 00:32:19.802 nvme0n2: ios=3635/3753, merge=0/0, ticks=38351/35385, in_queue=73736, util=91.99% 00:32:19.802 nvme0n3: ios=3643/3796, merge=0/0, ticks=35268/34136, in_queue=69404, util=96.76% 00:32:19.802 nvme0n4: ios=3650/4096, merge=0/0, ticks=40825/42382, in_queue=83207, util=98.24% 00:32:19.802 04:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:19.802 [global] 00:32:19.802 thread=1 00:32:19.802 invalidate=1 00:32:19.802 rw=randwrite 00:32:19.802 time_based=1 00:32:19.802 runtime=1 00:32:19.802 ioengine=libaio 00:32:19.802 direct=1 00:32:19.802 bs=4096 00:32:19.802 iodepth=128 00:32:19.802 norandommap=0 00:32:19.802 numjobs=1 00:32:19.802 00:32:19.802 verify_dump=1 00:32:19.802 verify_backlog=512 00:32:19.802 verify_state_save=0 00:32:19.802 do_verify=1 00:32:19.802 verify=crc32c-intel 00:32:19.802 [job0] 00:32:19.802 filename=/dev/nvme0n1 00:32:19.802 [job1] 00:32:19.802 filename=/dev/nvme0n2 00:32:19.802 [job2] 00:32:19.802 filename=/dev/nvme0n3 00:32:19.802 [job3] 00:32:19.802 filename=/dev/nvme0n4 00:32:19.802 Could not set queue depth (nvme0n1) 00:32:19.802 Could not set queue depth (nvme0n2) 00:32:19.802 Could not set queue depth (nvme0n3) 00:32:19.802 Could not set queue depth (nvme0n4) 00:32:20.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:20.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:20.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:20.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:32:20.060 fio-3.35 00:32:20.060 Starting 4 threads 00:32:21.432 00:32:21.432 job0: (groupid=0, jobs=1): err= 0: pid=285961: Tue Dec 10 04:19:20 2024 00:32:21.432 read: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:32:21.432 slat (nsec): min=1169, max=13802k, avg=159871.00, stdev=1049087.49 00:32:21.432 clat (usec): min=713, max=71693, avg=21090.11, stdev=9021.41 00:32:21.432 lat (usec): min=5277, max=71699, avg=21249.98, stdev=9076.15 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 7701], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[13304], 00:32:21.432 | 30.00th=[16909], 40.00th=[18744], 50.00th=[19530], 60.00th=[21890], 00:32:21.432 | 70.00th=[23987], 80.00th=[24773], 90.00th=[31851], 95.00th=[38011], 00:32:21.432 | 99.00th=[52167], 99.50th=[59507], 99.90th=[71828], 99.95th=[71828], 00:32:21.432 | 99.99th=[71828] 00:32:21.432 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:32:21.432 slat (usec): min=2, max=15295, avg=161.10, stdev=955.07 00:32:21.432 clat (usec): min=4201, max=71483, avg=20264.95, stdev=9294.16 00:32:21.432 lat (usec): min=4213, max=71495, avg=20426.06, stdev=9373.83 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[10028], 20.00th=[13698], 00:32:21.432 | 30.00th=[15533], 40.00th=[17433], 50.00th=[19268], 60.00th=[20841], 00:32:21.432 | 70.00th=[22414], 80.00th=[25297], 90.00th=[30016], 95.00th=[35914], 00:32:21.432 | 99.00th=[57934], 99.50th=[63701], 99.90th=[71828], 99.95th=[71828], 00:32:21.432 | 99.99th=[71828] 00:32:21.432 bw ( KiB/s): min=12216, max=12360, per=17.91%, avg=12288.00, stdev=101.82, samples=2 00:32:21.432 iops : min= 3054, max= 3090, avg=3072.00, stdev=25.46, samples=2 00:32:21.432 lat (usec) : 750=0.02% 00:32:21.432 lat (msec) : 10=7.50%, 20=46.25%, 50=44.70%, 100=1.53% 00:32:21.432 cpu : usr=2.09%, sys=3.88%, ctx=249, majf=0, minf=1 00:32:21.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:21.432 issued rwts: total=3062,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:21.432 job1: (groupid=0, jobs=1): err= 0: pid=285962: Tue Dec 10 04:19:20 2024 00:32:21.432 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:32:21.432 slat (nsec): min=1089, max=16299k, avg=88691.69, stdev=672701.97 00:32:21.432 clat (usec): min=769, max=57250, avg=12415.08, stdev=7117.60 00:32:21.432 lat (usec): min=775, max=57259, avg=12503.77, stdev=7139.19 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 1844], 5.00th=[ 4293], 10.00th=[ 6587], 20.00th=[ 8356], 00:32:21.432 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11731], 00:32:21.432 | 70.00th=[13173], 80.00th=[15139], 90.00th=[19006], 95.00th=[24511], 00:32:21.432 | 99.00th=[42206], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:32:21.432 | 99.99th=[57410] 00:32:21.432 write: IOPS=5443, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1005msec); 0 zone resets 00:32:21.432 slat (nsec): min=1800, max=10131k, avg=81785.53, stdev=531271.12 00:32:21.432 clat (usec): min=250, max=63567, avg=11684.11, stdev=8555.62 00:32:21.432 lat (usec): min=264, max=63574, avg=11765.90, stdev=8596.34 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 1336], 5.00th=[ 3064], 10.00th=[ 5800], 20.00th=[ 7701], 
00:32:21.432 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:32:21.432 | 70.00th=[11600], 80.00th=[12780], 90.00th=[15533], 95.00th=[24773], 00:32:21.432 | 99.00th=[57934], 99.50th=[58459], 99.90th=[63701], 99.95th=[63701], 00:32:21.432 | 99.99th=[63701] 00:32:21.432 bw ( KiB/s): min=20480, max=22272, per=31.16%, avg=21376.00, stdev=1267.14, samples=2 00:32:21.432 iops : min= 5120, max= 5568, avg=5344.00, stdev=316.78, samples=2 00:32:21.432 lat (usec) : 500=0.02%, 750=0.06%, 1000=0.28% 00:32:21.432 lat (msec) : 2=1.66%, 4=3.77%, 10=32.02%, 20=54.67%, 50=6.06% 00:32:21.432 lat (msec) : 100=1.46% 00:32:21.432 cpu : usr=3.49%, sys=6.18%, ctx=414, majf=0, minf=1 00:32:21.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:21.432 issued rwts: total=5120,5471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:21.432 job2: (groupid=0, jobs=1): err= 0: pid=285963: Tue Dec 10 04:19:20 2024 00:32:21.432 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:32:21.432 slat (nsec): min=1090, max=11468k, avg=94366.14, stdev=646276.82 00:32:21.432 clat (usec): min=4492, max=39744, avg=12257.54, stdev=4447.64 00:32:21.432 lat (usec): min=4497, max=39753, avg=12351.90, stdev=4490.05 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 4555], 5.00th=[ 7504], 10.00th=[ 8717], 20.00th=[ 9896], 00:32:21.432 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:32:21.432 | 70.00th=[11994], 80.00th=[13960], 90.00th=[16188], 95.00th=[20841], 00:32:21.432 | 99.00th=[30540], 99.50th=[30540], 99.90th=[39584], 99.95th=[39584], 00:32:21.432 | 99.99th=[39584] 00:32:21.432 write: IOPS=5487, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1005msec); 0 zone resets 00:32:21.432 slat (nsec): min=1752, max=10333k, avg=88717.17, stdev=576042.58 00:32:21.432 clat (usec): min=457, max=36806, avg=11675.95, stdev=3808.65 00:32:21.432 lat (usec): min=4515, max=36813, avg=11764.67, stdev=3850.98 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 5604], 5.00th=[ 7373], 10.00th=[ 9372], 20.00th=[ 9765], 00:32:21.432 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:32:21.432 | 70.00th=[11600], 80.00th=[11863], 90.00th=[14746], 95.00th=[19268], 00:32:21.432 | 99.00th=[29492], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:32:21.432 | 99.99th=[36963] 00:32:21.432 bw ( KiB/s): min=19768, max=23328, per=31.41%, avg=21548.00, stdev=2517.30, samples=2 00:32:21.432 iops : min= 4942, max= 5832, avg=5387.00, stdev=629.33, samples=2 00:32:21.432 lat (usec) : 500=0.01% 00:32:21.432 lat (msec) : 10=23.26%, 20=72.25%, 50=4.48% 00:32:21.432 cpu : usr=4.28%, sys=5.68%, ctx=382, majf=0, minf=1 00:32:21.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:21.432 issued rwts: total=5120,5515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:21.432 job3: (groupid=0, jobs=1): err= 0: pid=285964: Tue Dec 10 04:19:20 2024 00:32:21.432 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:32:21.432 slat (nsec): min=1702, max=12230k, 
avg=154365.57, stdev=931292.13 00:32:21.432 clat (usec): min=7911, max=45735, avg=19833.65, stdev=7396.33 00:32:21.432 lat (usec): min=8554, max=45763, avg=19988.02, stdev=7482.09 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11863], 00:32:21.432 | 30.00th=[14222], 40.00th=[16909], 50.00th=[20055], 60.00th=[22152], 00:32:21.432 | 70.00th=[23725], 80.00th=[25822], 90.00th=[30278], 95.00th=[32375], 00:32:21.432 | 99.00th=[37487], 99.50th=[41157], 99.90th=[41157], 99.95th=[43779], 00:32:21.432 | 99.99th=[45876] 00:32:21.432 write: IOPS=3165, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1004msec); 0 zone resets 00:32:21.432 slat (usec): min=2, max=21605, avg=157.80, stdev=1035.40 00:32:21.432 clat (usec): min=3285, max=61494, avg=20265.98, stdev=7312.78 00:32:21.432 lat (usec): min=4042, max=61532, avg=20423.78, stdev=7396.95 00:32:21.432 clat percentiles (usec): 00:32:21.432 | 1.00th=[ 6325], 5.00th=[11994], 10.00th=[13698], 20.00th=[15270], 00:32:21.432 | 30.00th=[16712], 40.00th=[18220], 50.00th=[19268], 60.00th=[19792], 00:32:21.432 | 70.00th=[21365], 80.00th=[24511], 90.00th=[28443], 95.00th=[30278], 00:32:21.432 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[55837], 00:32:21.432 | 99.99th=[61604] 00:32:21.432 bw ( KiB/s): min=12288, max=12288, per=17.91%, avg=12288.00, stdev= 0.00, samples=2 00:32:21.432 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:32:21.432 lat (msec) : 4=0.02%, 10=5.26%, 20=50.21%, 50=43.82%, 100=0.69% 00:32:21.432 cpu : usr=2.39%, sys=5.78%, ctx=210, majf=0, minf=1 00:32:21.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:21.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:21.432 issued rwts: total=3072,3178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:21.432 00:32:21.432 Run status group 0 (all jobs): 00:32:21.432 READ: bw=63.6MiB/s (66.7MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1004-1005msec 00:32:21.432 WRITE: bw=67.0MiB/s (70.2MB/s), 11.9MiB/s-21.4MiB/s (12.5MB/s-22.5MB/s), io=67.3MiB (70.6MB), run=1004-1005msec 00:32:21.432 00:32:21.432 Disk stats (read/write): 00:32:21.432 nvme0n1: ios=2615/2886, merge=0/0, ticks=18073/21814, in_queue=39887, util=87.37% 00:32:21.432 nvme0n2: ios=4305/4608, merge=0/0, ticks=32618/34462, in_queue=67080, util=90.47% 00:32:21.432 nvme0n3: ios=4422/4608, merge=0/0, ticks=24985/21765, in_queue=46750, util=94.70% 00:32:21.432 nvme0n4: ios=2511/2560, merge=0/0, ticks=18371/16927, in_queue=35298, util=94.35% 00:32:21.432 04:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:21.432 04:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=286190 00:32:21.432 04:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:21.432 04:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:21.432 [global] 00:32:21.432 thread=1 00:32:21.432 invalidate=1 00:32:21.432 rw=read 00:32:21.432 time_based=1 00:32:21.432 runtime=10 00:32:21.432 ioengine=libaio 00:32:21.432 direct=1 00:32:21.432 bs=4096 00:32:21.432 iodepth=1 00:32:21.432 
norandommap=1 00:32:21.432 numjobs=1 00:32:21.432 00:32:21.432 [job0] 00:32:21.432 filename=/dev/nvme0n1 00:32:21.432 [job1] 00:32:21.432 filename=/dev/nvme0n2 00:32:21.432 [job2] 00:32:21.432 filename=/dev/nvme0n3 00:32:21.432 [job3] 00:32:21.432 filename=/dev/nvme0n4 00:32:21.432 Could not set queue depth (nvme0n1) 00:32:21.432 Could not set queue depth (nvme0n2) 00:32:21.432 Could not set queue depth (nvme0n3) 00:32:21.432 Could not set queue depth (nvme0n4) 00:32:21.689 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:21.689 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:21.689 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:21.689 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:21.689 fio-3.35 00:32:21.689 Starting 4 threads 00:32:24.210 04:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:24.466 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43073536, buflen=4096 00:32:24.466 fio: pid=286335, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.466 04:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:24.723 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=30838784, buflen=4096 00:32:24.723 fio: pid=286334, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.723 04:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.723 04:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:24.980 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46018560, buflen=4096 00:32:24.980 fio: pid=286326, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.980 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.980 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:25.238 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:25.238 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:25.238 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=344064, buflen=4096 00:32:25.238 fio: pid=286328, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:25.238 00:32:25.238 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=286326: Tue Dec 10 04:19:24 2024 00:32:25.238 read: IOPS=3632, BW=14.2MiB/s 
(14.9MB/s)(43.9MiB/3093msec) 00:32:25.238 slat (usec): min=5, max=26661, avg=11.05, stdev=291.43 00:32:25.238 clat (usec): min=169, max=41113, avg=260.69, stdev=389.34 00:32:25.238 lat (usec): min=176, max=41120, avg=271.74, stdev=486.92 00:32:25.238 clat percentiles (usec): 00:32:25.238 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 219], 20.00th=[ 229], 00:32:25.238 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:32:25.238 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 392], 00:32:25.238 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 562], 00:32:25.238 | 99.99th=[ 701] 00:32:25.238 bw ( KiB/s): min=13128, max=15949, per=41.53%, avg=14638.17, stdev=1047.09, samples=6 00:32:25.238 iops : min= 3282, max= 3987, avg=3659.50, stdev=261.71, samples=6 00:32:25.238 lat (usec) : 250=61.45%, 500=37.86%, 750=0.67% 00:32:25.238 lat (msec) : 50=0.01% 00:32:25.238 cpu : usr=0.97%, sys=3.30%, ctx=11238, majf=0, minf=2 00:32:25.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 issued rwts: total=11236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.238 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=286328: Tue Dec 10 04:19:24 2024 00:32:25.238 read: IOPS=25, BW=101KiB/s (103kB/s)(336KiB/3332msec) 00:32:25.238 slat (usec): min=8, max=14729, avg=195.35, stdev=1595.24 00:32:25.238 clat (usec): min=268, max=42442, avg=39210.95, stdev=8757.85 00:32:25.238 lat (usec): min=294, max=55809, avg=39408.35, stdev=8939.63 00:32:25.238 clat percentiles (usec): 00:32:25.238 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:25.238 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.238 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:32:25.238 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:25.238 | 99.99th=[42206] 00:32:25.238 bw ( KiB/s): min= 96, max= 114, per=0.29%, avg=101.67, stdev= 7.20, samples=6 00:32:25.238 iops : min= 24, max= 28, avg=25.33, stdev= 1.63, samples=6 00:32:25.238 lat (usec) : 500=4.71% 00:32:25.238 lat (msec) : 50=94.12% 00:32:25.238 cpu : usr=0.09%, sys=0.00%, ctx=86, majf=0, minf=2 00:32:25.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.238 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=286334: Tue Dec 10 04:19:24 2024 00:32:25.238 read: IOPS=2614, BW=10.2MiB/s (10.7MB/s)(29.4MiB/2880msec) 00:32:25.238 slat (nsec): min=6386, max=31654, avg=7416.08, stdev=1329.88 00:32:25.238 clat (usec): min=193, max=42444, avg=371.12, stdev=2251.76 00:32:25.238 lat (usec): min=200, max=42453, avg=378.53, stdev=2252.56 00:32:25.238 clat percentiles (usec): 00:32:25.238 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 235], 00:32:25.238 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:32:25.238 | 
70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:32:25.238 | 99.00th=[ 363], 99.50th=[ 449], 99.90th=[42206], 99.95th=[42206], 00:32:25.238 | 99.99th=[42206] 00:32:25.238 bw ( KiB/s): min= 104, max=15752, per=27.38%, avg=9651.20, stdev=8016.25, samples=5 00:32:25.238 iops : min= 26, max= 3938, avg=2412.80, stdev=2004.06, samples=5 00:32:25.238 lat (usec) : 250=61.91%, 500=37.68%, 750=0.09% 00:32:25.238 lat (msec) : 50=0.31% 00:32:25.238 cpu : usr=0.59%, sys=2.47%, ctx=7530, majf=0, minf=2 00:32:25.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 issued rwts: total=7530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.238 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=286335: Tue Dec 10 04:19:24 2024 00:32:25.238 read: IOPS=3930, BW=15.3MiB/s (16.1MB/s)(41.1MiB/2676msec) 00:32:25.238 slat (nsec): min=6126, max=34365, avg=7134.30, stdev=876.72 00:32:25.238 clat (usec): min=179, max=513, avg=244.14, stdev=32.90 00:32:25.238 lat (usec): min=186, max=520, avg=251.27, stdev=32.84 00:32:25.238 clat percentiles (usec): 00:32:25.238 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:32:25.238 | 30.00th=[ 227], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 249], 00:32:25.238 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 289], 00:32:25.238 | 99.00th=[ 396], 99.50th=[ 457], 99.90th=[ 506], 99.95th=[ 506], 00:32:25.238 | 99.99th=[ 515] 00:32:25.238 bw ( KiB/s): min=15056, max=17184, per=45.06%, avg=15884.80, stdev=985.02, samples=5 00:32:25.238 iops : min= 3764, max= 4296, avg=3971.20, stdev=246.25, samples=5 00:32:25.238 lat (usec) : 250=63.80%, 500=36.05%, 750=0.14% 00:32:25.238 cpu : usr=0.97%, sys=3.55%, ctx=10517, majf=0, minf=1 00:32:25.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.238 issued rwts: total=10517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.238 00:32:25.238 Run status group 0 (all jobs): 00:32:25.238 READ: bw=34.4MiB/s (36.1MB/s), 101KiB/s-15.3MiB/s (103kB/s-16.1MB/s), io=115MiB (120MB), run=2676-3332msec 00:32:25.238 00:32:25.238 Disk stats (read/write): 00:32:25.238 nvme0n1: ios=11214/0, merge=0/0, ticks=2858/0, in_queue=2858, util=93.13% 00:32:25.238 nvme0n2: ios=84/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.04% 00:32:25.238 nvme0n3: ios=7327/0, merge=0/0, ticks=2721/0, in_queue=2721, util=96.21% 00:32:25.238 nvme0n4: ios=10161/0, merge=0/0, ticks=2435/0, in_queue=2435, util=96.39% 00:32:25.495 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:25.495 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:25.495 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:32:25.495 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:25.752 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:25.752 04:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:26.008 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:26.008 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 286190 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:26.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:26.264 nvmf hotplug test: fio failed as expected 00:32:26.264 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:26.521 04:19:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.521 rmmod nvme_tcp 00:32:26.521 rmmod nvme_fabrics 00:32:26.521 rmmod nvme_keyring 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 283613 ']' 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 283613 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 283613 ']' 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 283613 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.521 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283613 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283613' 00:32:26.780 killing process with pid 283613 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 283613 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 283613 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.780 04:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.315 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:29.315 00:32:29.315 real 0m25.900s 00:32:29.315 user 1m31.523s 00:32:29.315 sys 0m11.279s 00:32:29.315 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.315 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.315 ************************************ 00:32:29.315 END TEST nvmf_fio_target 00:32:29.315 ************************************ 00:32:29.315 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:29.315 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:29.316 ************************************ 00:32:29.316 START TEST nvmf_bdevio 00:32:29.316 ************************************ 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:29.316 * Looking for test storage... 
00:32:29.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.316 --rc genhtml_branch_coverage=1 00:32:29.316 --rc genhtml_function_coverage=1 00:32:29.316 --rc genhtml_legend=1 00:32:29.316 --rc geninfo_all_blocks=1 00:32:29.316 --rc geninfo_unexecuted_blocks=1 00:32:29.316 00:32:29.316 ' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.316 --rc genhtml_branch_coverage=1 00:32:29.316 --rc genhtml_function_coverage=1 00:32:29.316 --rc genhtml_legend=1 00:32:29.316 --rc geninfo_all_blocks=1 00:32:29.316 --rc geninfo_unexecuted_blocks=1 00:32:29.316 00:32:29.316 ' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.316 --rc genhtml_branch_coverage=1 00:32:29.316 --rc genhtml_function_coverage=1 00:32:29.316 --rc genhtml_legend=1 00:32:29.316 --rc geninfo_all_blocks=1 00:32:29.316 --rc geninfo_unexecuted_blocks=1 00:32:29.316 00:32:29.316 ' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.316 --rc genhtml_branch_coverage=1 00:32:29.316 --rc genhtml_function_coverage=1 00:32:29.316 --rc genhtml_legend=1 00:32:29.316 --rc geninfo_all_blocks=1 00:32:29.316 --rc geninfo_unexecuted_blocks=1 00:32:29.316 00:32:29.316 ' 00:32:29.316 04:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.316 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.317 04:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.317 04:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:34.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:34.592 04:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:34.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:34.592 Found net devices under 0000:af:00.0: cvl_0_0 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:34.592 Found net devices under 0000:af:00.1: cvl_0_1 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.592 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.593 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.852 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.852 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.852 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.852 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.852 04:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:32:34.852 00:32:34.852 --- 10.0.0.2 ping statistics --- 00:32:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.852 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:34.852 00:32:34.852 --- 10.0.0.1 ping statistics --- 00:32:34.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.852 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.852 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.852 04:19:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=290537 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 290537 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 290537 ']' 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.111 04:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:35.111 [2024-12-10 04:19:34.188151] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:35.111 [2024-12-10 04:19:34.189139] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:32:35.111 [2024-12-10 04:19:34.189187] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.111 [2024-12-10 04:19:34.271186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:35.111 [2024-12-10 04:19:34.314308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.111 [2024-12-10 04:19:34.314343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.111 [2024-12-10 04:19:34.314351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.111 [2024-12-10 04:19:34.314357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.111 [2024-12-10 04:19:34.314363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.111 [2024-12-10 04:19:34.315763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:35.111 [2024-12-10 04:19:34.315800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:35.111 [2024-12-10 04:19:34.315883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:35.111 [2024-12-10 04:19:34.315885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:35.111 [2024-12-10 04:19:34.384952] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
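The nvmfappstart traced above reduces to launching nvmf_tgt inside the target namespace with --interrupt-mode; a minimal standalone sketch, with paths taken from this log (the rpc_get_methods polling loop is a simplified stand-in for the harness's waitforlisten, which polls the same RPC socket):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0x78 pins reactors to cores 3-6, matching the "Reactor started" notices above
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# wait until the RPC socket answers before issuing configuration RPCs
until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done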
00:32:35.111 [2024-12-10 04:19:34.386099] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:35.111 [2024-12-10 04:19:34.386617] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:35.111 [2024-12-10 04:19:34.386793] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:35.111 [2024-12-10 04:19:34.386844] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 [2024-12-10 04:19:35.068752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 Malloc0 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.048 04:19:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.048 [2024-12-10 04:19:35.144964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:36.048 { 00:32:36.048 "params": { 00:32:36.048 "name": "Nvme$subsystem", 00:32:36.048 "trtype": "$TEST_TRANSPORT", 00:32:36.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:36.048 "adrfam": "ipv4", 00:32:36.048 "trsvcid": "$NVMF_PORT", 00:32:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:36.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:36.048 "hdgst": ${hdgst:-false}, 00:32:36.048 "ddgst": ${ddgst:-false} 00:32:36.048 }, 00:32:36.048 "method": "bdev_nvme_attach_controller" 00:32:36.048 } 00:32:36.048 EOF 00:32:36.048 )") 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:36.048 04:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:36.048 "params": { 00:32:36.048 "name": "Nvme1", 00:32:36.048 "trtype": "tcp", 00:32:36.048 "traddr": "10.0.0.2", 00:32:36.048 "adrfam": "ipv4", 00:32:36.048 "trsvcid": "4420", 00:32:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:36.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:36.048 "hdgst": false, 00:32:36.048 "ddgst": false 00:32:36.048 }, 00:32:36.048 "method": "bdev_nvme_attach_controller" 00:32:36.048 }' 00:32:36.048 [2024-12-10 04:19:35.194050] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
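The subsystem provisioning and bdevio launch traced above, condensed into a sketch. Two assumptions: the outer "subsystems"/"bdev" wrapper around the attach-controller entry reflects the usual shape of an SPDK --json config (the trace only expands the inner entry), and a temp file stands in for the harness's /dev/fd/62 process substitution:

rpc="$SPDK"/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

cat > /tmp/bdevio_nvme.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
JSON
"$SPDK"/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json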
00:32:36.048 [2024-12-10 04:19:35.194096] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290738 ] 00:32:36.048 [2024-12-10 04:19:35.269047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:36.048 [2024-12-10 04:19:35.311218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.048 [2024-12-10 04:19:35.311327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.048 [2024-12-10 04:19:35.311327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.306 I/O targets: 00:32:36.306 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:36.306 00:32:36.306 00:32:36.306 CUnit - A unit testing framework for C - Version 2.1-3 00:32:36.306 http://cunit.sourceforge.net/ 00:32:36.306 00:32:36.306 00:32:36.306 Suite: bdevio tests on: Nvme1n1 00:32:36.564 Test: blockdev write read block ...passed 00:32:36.564 Test: blockdev write zeroes read block ...passed 00:32:36.564 Test: blockdev write zeroes read no split ...passed 00:32:36.564 Test: blockdev write zeroes read split ...passed 00:32:36.564 Test: blockdev write zeroes read split partial ...passed 00:32:36.564 Test: blockdev reset ...[2024-12-10 04:19:35.688743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:36.564 [2024-12-10 04:19:35.688806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8610 (9): Bad file descriptor 00:32:36.564 [2024-12-10 04:19:35.781003] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:32:36.564 passed 00:32:36.564 Test: blockdev write read 8 blocks ...passed 00:32:36.564 Test: blockdev write read size > 128k ...passed 00:32:36.564 Test: blockdev write read invalid size ...passed 00:32:36.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:36.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:36.564 Test: blockdev write read max offset ...passed 00:32:36.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:36.822 Test: blockdev writev readv 8 blocks ...passed 00:32:36.822 Test: blockdev writev readv 30 x 1block ...passed 00:32:36.822 Test: blockdev writev readv block ...passed 00:32:36.822 Test: blockdev writev readv size > 128k ...passed 00:32:36.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:36.822 Test: blockdev comparev and writev ...[2024-12-10 04:19:35.954341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.954371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.954385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.954397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.954687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.954698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.954709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.954716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.954996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.955007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.955019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.955312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.822 [2024-12-10 04:19:35.955324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.822 [2024-12-10 04:19:35.955336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:36.823 [2024-12-10 04:19:35.955343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.823 passed 00:32:36.823 Test: blockdev nvme passthru rw ...passed 00:32:36.823 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:19:36.038457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:36.823 [2024-12-10 04:19:36.038475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.823 [2024-12-10 04:19:36.038584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:36.823 [2024-12-10 04:19:36.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.823 [2024-12-10 04:19:36.038699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:36.823 [2024-12-10 04:19:36.038709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.823 [2024-12-10 04:19:36.038813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:36.823 [2024-12-10 04:19:36.038823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.823 passed 00:32:36.823 Test: blockdev nvme admin passthru ...passed 00:32:36.823 Test: blockdev copy ...passed 00:32:36.823 00:32:36.823 Run Summary: Type Total Ran Passed Failed Inactive 00:32:36.823 suites 1 1 n/a 0 0 00:32:36.823 tests 23 23 23 0 0 00:32:36.823 asserts 152 152 152 0 n/a 00:32:36.823 00:32:36.823 Elapsed time = 1.024 seconds 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.095 rmmod nvme_tcp 00:32:37.095 rmmod nvme_fabrics 00:32:37.095 rmmod nvme_keyring 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
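The rmmod lines above and the killprocess/iptr/namespace cleanup below together make up nvmftestfini; condensed into a standalone sketch (remove_spdk_ns is approximated here by an explicit ip netns delete):

modprobe -v -r nvme-tcp        # cascades to nvme_fabrics/nvme_keyring, per the rmmod lines above
kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess 290537
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip only the tagged test rules
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns, approximately
ip -4 addr flush cvl_0_1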
00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 290537 ']' 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 290537 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 290537 ']' 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 290537 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290537 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290537' 00:32:37.095 killing process with pid 290537 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 290537 00:32:37.095 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 290537 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.406 04:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.356 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.356 00:32:39.356 real 0m10.518s 00:32:39.356 user 0m9.031s 
00:32:39.356 sys 0m5.208s 00:32:39.356 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.356 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:39.356 ************************************ 00:32:39.356 END TEST nvmf_bdevio 00:32:39.356 ************************************ 00:32:39.615 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:39.615 00:32:39.615 real 4m34.188s 00:32:39.615 user 9m10.259s 00:32:39.615 sys 1m50.539s 00:32:39.615 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.615 04:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.615 ************************************ 00:32:39.615 END TEST nvmf_target_core_interrupt_mode 00:32:39.615 ************************************ 00:32:39.615 04:19:38 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:39.615 04:19:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.615 04:19:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.615 04:19:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.615 ************************************ 00:32:39.615 START TEST nvmf_interrupt 00:32:39.615 ************************************ 00:32:39.615 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:39.615 * Looking for test storage... 
00:32:39.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.615 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:39.615 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:39.615 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.875 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.875 --rc genhtml_branch_coverage=1 00:32:39.875 --rc genhtml_function_coverage=1 00:32:39.875 --rc genhtml_legend=1 00:32:39.875 --rc geninfo_all_blocks=1 00:32:39.875 --rc geninfo_unexecuted_blocks=1 00:32:39.875 00:32:39.876 ' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:39.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.876 --rc genhtml_branch_coverage=1 00:32:39.876 --rc genhtml_function_coverage=1 00:32:39.876 --rc genhtml_legend=1 00:32:39.876 --rc geninfo_all_blocks=1 00:32:39.876 --rc geninfo_unexecuted_blocks=1 00:32:39.876 00:32:39.876 ' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:39.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.876 --rc genhtml_branch_coverage=1 00:32:39.876 --rc genhtml_function_coverage=1 00:32:39.876 --rc genhtml_legend=1 00:32:39.876 --rc geninfo_all_blocks=1 00:32:39.876 --rc geninfo_unexecuted_blocks=1 00:32:39.876 00:32:39.876 ' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:39.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.876 --rc genhtml_branch_coverage=1 00:32:39.876 --rc genhtml_function_coverage=1 00:32:39.876 --rc genhtml_legend=1 00:32:39.876 --rc geninfo_all_blocks=1 00:32:39.876 --rc geninfo_unexecuted_blocks=1 00:32:39.876 00:32:39.876 ' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.876 04:19:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:46.447 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.447 04:19:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:46.447 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:46.447 Found net devices under 0000:af:00.0: cvl_0_0 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:46.447 Found net devices under 0000:af:00.1: cvl_0_1 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.447 04:19:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.447 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:32:46.448 00:32:46.448 --- 10.0.0.2 ping statistics --- 00:32:46.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.448 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:46.448 00:32:46.448 --- 10.0.0.1 ping statistics --- 00:32:46.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.448 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=294442 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 294442 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 294442 ']' 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.448 04:19:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 [2024-12-10 04:19:44.944823] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.448 [2024-12-10 04:19:44.945728] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:32:46.448 [2024-12-10 04:19:44.945760] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.448 [2024-12-10 04:19:45.024689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:46.448 [2024-12-10 04:19:45.064033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
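The interface plumbing replayed above is the same nvmf_tcp_init sequence as in the bdevio run earlier; reduced to its bare commands, with device names and addresses taken from this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# ipts tags the rule so teardown can strip exactly what the test added
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator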
00:32:46.448 [2024-12-10 04:19:45.064069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.448 [2024-12-10 04:19:45.064076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.448 [2024-12-10 04:19:45.064083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.448 [2024-12-10 04:19:45.064087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.448 [2024-12-10 04:19:45.065174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.448 [2024-12-10 04:19:45.065178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.448 [2024-12-10 04:19:45.131808] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.448 [2024-12-10 04:19:45.132351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:46.448 [2024-12-10 04:19:45.132575] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:46.448 5000+0 records in 00:32:46.448 5000+0 records out 00:32:46.448 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0175583 s, 583 MB/s 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 AIO0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 [2024-12-10 04:19:45.253933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.448 04:19:45 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:46.448 [2024-12-10 04:19:45.294292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 294442 0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 0 idle 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294442 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0' 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294442 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.24 reactor_0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:46.448 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 294442 1 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 1 idle 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294449 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294449 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=294541 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
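Taken together, the rpc_cmd calls traced above (setup_bdev_aio plus interrupt.sh@18 through @21) amount to a short target bring-up. A minimal standalone sketch, assuming rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock; every path, flag, NQN and serial below is taken verbatim from the trace. The {0..1} loop that follows then checks that each reactor goes busy once spdk_nvme_perf starts.

# Minimal sketch of the target bring-up traced above; not the verbatim test,
# but the same RPC sequence against a running nvmf_tgt.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
AIOFILE=$SPDK/test/nvmf/target/aiofile

dd if=/dev/zero of=$AIOFILE bs=2048 count=5000          # ~10 MB backing file
$RPC bdev_aio_create $AIOFILE AIO0 2048                 # expose it as bdev AIO0
$RPC nvmf_create_transport -t tcp -o -u 8192 -q 256     # TCP transport, qdepth 256
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420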
00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 294442 0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 294442 0 busy 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:46.449 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294442 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.35 reactor_0' 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294442 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.35 reactor_0 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 294442 1 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 294442 1 busy 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:46.707 04:19:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294449 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.23 reactor_1' 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294449 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.23 reactor_1 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:46.965 04:19:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 294541 00:32:56.934 Initializing NVMe Controllers 00:32:56.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.934 Controller IO queue size 256, less than required. 00:32:56.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:56.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:56.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:56.934 Initialization complete. Launching workers. 
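Every reactor_is_busy / reactor_is_idle call in this test reduces to the same top/grep/sed/awk pipeline seen above. A simplified reconstruction of that probe (not the verbatim interrupt/common.sh helper): field 9 of top's batch output is %CPU, and the thresholds are the ones printed in the trace, an idle ceiling of 30 and a busy floor of 65, lowered to 30 while perf runs. The spdk_nvme_perf summary follows below.

# Reconstructed sketch of the reactor busy/idle probe used throughout this
# test; a simplification of interrupt/common.sh, not its verbatim body.
reactor_check() {
    local pid=$1 idx=$2 state=$3            # state is "busy" or "idle"
    local busy_threshold=65 idle_threshold=30
    local row cpu_rate
    # One batch iteration of top in threads mode, restricted to the app pid
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
    # Field 9 of the top row is %CPU; strip leading blanks, keep integer part
    cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}
    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))    # busy: at or above the busy floor
    else
        (( cpu_rate <= idle_threshold ))    # idle: at or below the idle ceiling
    fi
}
# e.g. reactor_check 294442 0 idle && echo "reactor_0 idle in intr mode"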
00:32:56.934 ========================================================
00:32:56.934 Latency(us)
00:32:56.934 Device Information : IOPS MiB/s Average min max
00:32:56.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16672.00 65.12 15362.23 3198.16 30682.64
00:32:56.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16527.20 64.56 15495.37 6481.35 31009.49
00:32:56.934 ========================================================
00:32:56.934 Total : 33199.20 129.68 15428.51 3198.16 31009.49
00:32:56.934
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 294442 0
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 0 idle
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256
00:32:56.934 04:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294442 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0'
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294442 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 294442 1
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 1 idle
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442
00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294449 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294449 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.934 04:19:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:57.502 04:19:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:57.502 04:19:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:57.502 04:19:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:57.502 04:19:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:57.502 04:19:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 294442 0 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 0 idle 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:59.406 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294442 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0' 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294442 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 294442 1 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 294442 1 idle 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=294442 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:59.664 04:19:58 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 294442 -w 256 00:32:59.664 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 294449 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1' 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 294449 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:59.924 04:19:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:59.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.924 rmmod nvme_tcp 00:32:59.924 rmmod nvme_fabrics 00:32:59.924 rmmod nvme_keyring 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 294442 ']' 00:32:59.924 
04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 294442 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 294442 ']' 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 294442 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.924 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294442 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294442' 00:33:00.183 killing process with pid 294442 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 294442 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 294442 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.183 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.184 04:19:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.720 04:20:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.720 00:33:02.720 real 0m22.722s 00:33:02.720 user 0m39.704s 00:33:02.720 sys 0m8.235s 00:33:02.720 04:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.720 04:20:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:02.720 ************************************ 00:33:02.720 END TEST nvmf_interrupt 00:33:02.720 ************************************ 00:33:02.720 00:33:02.720 real 27m20.232s 00:33:02.720 user 56m15.821s 00:33:02.720 sys 9m18.776s 00:33:02.720 04:20:01 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.720 04:20:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.720 ************************************ 00:33:02.720 END TEST nvmf_tcp 00:33:02.720 ************************************ 00:33:02.720 04:20:01 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:02.720 04:20:01 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:02.720 04:20:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
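Before the spdkcli run starts, it is worth condensing the host side of the nvmf_interrupt test that just finished: connect with the kernel initiator, poll until a block device carrying the subsystem serial appears (waitforserial), then disconnect. All values below are verbatim from the trace; only the loop body is paraphrased.

# Condensed host-side cycle from the nvmf_interrupt run above (needs nvme-cli
# and the kernel nvme-tcp module; hostnqn/hostid are this run's values).
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

i=0
while (( i++ <= 15 )); do                    # waitforserial: up to ~30 s
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done

nvme disconnect -n nqn.2016-06.io.spdk:cnode1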
00:33:02.720 04:20:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.720 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:33:02.720 ************************************ 00:33:02.720 START TEST spdkcli_nvmf_tcp 00:33:02.720 ************************************ 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:02.720 * Looking for test storage... 00:33:02.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.720 --rc genhtml_branch_coverage=1 00:33:02.720 --rc genhtml_function_coverage=1 00:33:02.720 --rc genhtml_legend=1 00:33:02.720 --rc geninfo_all_blocks=1 00:33:02.720 --rc geninfo_unexecuted_blocks=1 00:33:02.720 00:33:02.720 ' 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.720 --rc genhtml_branch_coverage=1 00:33:02.720 --rc genhtml_function_coverage=1 00:33:02.720 --rc genhtml_legend=1 00:33:02.720 --rc geninfo_all_blocks=1 00:33:02.720 --rc geninfo_unexecuted_blocks=1 00:33:02.720 00:33:02.720 ' 00:33:02.720 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.721 --rc genhtml_branch_coverage=1 00:33:02.721 --rc genhtml_function_coverage=1 00:33:02.721 --rc genhtml_legend=1 00:33:02.721 --rc geninfo_all_blocks=1 00:33:02.721 --rc geninfo_unexecuted_blocks=1 00:33:02.721 00:33:02.721 ' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:02.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.721 --rc genhtml_branch_coverage=1 00:33:02.721 --rc genhtml_function_coverage=1 00:33:02.721 --rc genhtml_legend=1 00:33:02.721 --rc geninfo_all_blocks=1 00:33:02.721 --rc geninfo_unexecuted_blocks=1 00:33:02.721 00:33:02.721 ' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:02.721 
04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:02.721 04:20:01 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:02.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=297314 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 297314 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 297314 ']' 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.721 04:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.721 [2024-12-10 04:20:01.833375] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
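For context, run_nvmf_tgt in spdkcli/common.sh launches a bare target on two cores and waitforlisten blocks until the app's RPC socket answers, which is what produced the startup notices above. A rough equivalent follows; the polling loop is an approximation of waitforlisten, not its verbatim body, and spdk_get_version is just one convenient liveness RPC.

# Rough equivalent of run_nvmf_tgt + waitforlisten from the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -m 0x3 -p 0 &      # core mask 0x3, main core 0
nvmf_tgt_pid=$!

for (( i = 0; i < 100; i++ )); do           # wait for /var/tmp/spdk.sock
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    kill -0 "$nvmf_tgt_pid" || { echo "nvmf_tgt exited early" >&2; break; }
    sleep 1
done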
00:33:02.721 [2024-12-10 04:20:01.833423] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297314 ] 00:33:02.721 [2024-12-10 04:20:01.903997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:02.721 [2024-12-10 04:20:01.943495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.721 [2024-12-10 04:20:01.943496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.980 04:20:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:02.980 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:02.980 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:02.980 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:02.980 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:02.980 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:02.980 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:02.980 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:02.980 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:02.980 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:02.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:02.980 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:02.980 ' 00:33:05.515 [2024-12-10 04:20:04.781671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.891 [2024-12-10 04:20:06.122124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:09.425 [2024-12-10 04:20:08.605837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:11.957 [2024-12-10 04:20:10.780784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:13.331 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:13.331 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:13.331 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.331 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:13.331 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:13.331 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:13.331 04:20:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:13.897 04:20:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.897 
04:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.897 04:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:13.897 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:13.897 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:13.897 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:13.897 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:13.897 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:13.897 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:13.897 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:13.897 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:13.897 ' 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:20.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:20.459 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:20.459 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:20.459 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:20.459 
04:20:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 297314 ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297314' 00:33:20.459 killing process with pid 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 297314 ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 297314 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 297314 ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 297314 00:33:20.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (297314) - No such process 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 297314 is not found' 00:33:20.459 Process with pid 297314 is not found 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:20.459 00:33:20.459 real 0m17.352s 00:33:20.459 user 0m38.256s 00:33:20.459 sys 0m0.787s 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.459 04:20:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:20.459 ************************************ 00:33:20.459 END TEST spdkcli_nvmf_tcp 00:33:20.459 ************************************ 00:33:20.459 04:20:18 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:20.459 04:20:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:20.459 04:20:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.459 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.459 ************************************ 00:33:20.459 START TEST nvmf_identify_passthru 00:33:20.459 ************************************ 00:33:20.459 04:20:18 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:20.459 * Looking for test storage... 
00:33:20.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.459 04:20:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.459 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:20.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.459 --rc genhtml_branch_coverage=1 00:33:20.460 --rc genhtml_function_coverage=1 00:33:20.460 --rc genhtml_legend=1 00:33:20.460 --rc geninfo_all_blocks=1 00:33:20.460 --rc geninfo_unexecuted_blocks=1 00:33:20.460 00:33:20.460 ' 00:33:20.460 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.460 --rc genhtml_branch_coverage=1 00:33:20.460 --rc genhtml_function_coverage=1 00:33:20.460 --rc genhtml_legend=1 00:33:20.460 --rc geninfo_all_blocks=1 00:33:20.460 --rc geninfo_unexecuted_blocks=1 00:33:20.460 00:33:20.460 ' 00:33:20.460 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.460 --rc genhtml_branch_coverage=1 00:33:20.460 --rc genhtml_function_coverage=1 00:33:20.460 --rc genhtml_legend=1 00:33:20.460 --rc geninfo_all_blocks=1 00:33:20.460 --rc geninfo_unexecuted_blocks=1 00:33:20.460 00:33:20.460 ' 00:33:20.460 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:20.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.460 --rc genhtml_branch_coverage=1 00:33:20.460 --rc genhtml_function_coverage=1 00:33:20.460 --rc genhtml_legend=1 00:33:20.460 --rc geninfo_all_blocks=1 00:33:20.460 --rc geninfo_unexecuted_blocks=1 00:33:20.460 00:33:20.460 ' 00:33:20.460 04:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:20.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.460 04:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.460 04:20:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:20.460 04:20:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.460 04:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.460 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.460 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:20.460 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.461 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.461 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.461 04:20:19 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.461 04:20:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.736 04:20:24 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:25.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:25.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:25.736 Found net devices under 0000:af:00.0: cvl_0_0 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:25.736 Found net devices under 0000:af:00.1: cvl_0_1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.736 04:20:24 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.736 04:20:24 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.736 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.736 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.736 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:33:25.993 00:33:25.993 --- 10.0.0.2 ping statistics --- 00:33:25.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.993 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
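The namespace plumbing traced above lets one physical host talk to itself over real E810 ports: the target-side function is moved into a private network namespace, so initiator traffic is actually routed out on the wire. Condensed from the exact commands in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns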
00:33:25.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:33:25.993 00:33:25.993 --- 10.0.0.1 ping statistics --- 00:33:25.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.993 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.993 04:20:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:25.993 04:20:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:25.993 04:20:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:30.174 04:20:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:33:30.174 04:20:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:30.174 04:20:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:30.174 04:20:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=304418 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:34.354 04:20:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 304418 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 304418 ']' 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.354 04:20:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.354 [2024-12-10 04:20:33.584194] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:33:34.354 [2024-12-10 04:20:33.584243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.612 [2024-12-10 04:20:33.661476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:34.612 [2024-12-10 04:20:33.704360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.612 [2024-12-10 04:20:33.704398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
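nvmf_tgt is launched inside the target namespace with --wait-for-rpc, so waitforlisten has to poll the app's RPC socket rather than sleep a fixed interval. A hedged sketch of that wait loop (helper body in autotest_common.sh assumed; rpc_get_methods is a standard SPDK RPC):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1        # app died before listening
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                                   # socket answered: app is up
        fi
        sleep 0.1
    done
    return 1
}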
00:33:34.612 [2024-12-10 04:20:33.704405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.612 [2024-12-10 04:20:33.704411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.612 [2024-12-10 04:20:33.704416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.612 [2024-12-10 04:20:33.709186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.612 [2024-12-10 04:20:33.709221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:34.612 [2024-12-10 04:20:33.709326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.612 [2024-12-10 04:20:33.709327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:35.177 04:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.177 INFO: Log level set to 20 00:33:35.177 INFO: Requests: 00:33:35.177 { 00:33:35.177 "jsonrpc": "2.0", 00:33:35.177 "method": "nvmf_set_config", 00:33:35.177 "id": 1, 00:33:35.177 "params": { 00:33:35.177 "admin_cmd_passthru": { 00:33:35.177 "identify_ctrlr": true 00:33:35.177 } 00:33:35.177 } 00:33:35.177 } 00:33:35.177 00:33:35.177 INFO: response: 00:33:35.177 { 00:33:35.177 "jsonrpc": "2.0", 00:33:35.177 "id": 1, 00:33:35.177 "result": true 00:33:35.177 } 00:33:35.177 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.177 04:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.177 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.177 INFO: Setting log level to 20 00:33:35.177 INFO: Setting log level to 20 00:33:35.177 INFO: Log level set to 20 00:33:35.177 INFO: Log level set to 20 00:33:35.177 INFO: Requests: 00:33:35.177 { 00:33:35.177 "jsonrpc": "2.0", 00:33:35.177 "method": "framework_start_init", 00:33:35.177 "id": 1 00:33:35.177 } 00:33:35.177 00:33:35.177 INFO: Requests: 00:33:35.177 { 00:33:35.177 "jsonrpc": "2.0", 00:33:35.177 "method": "framework_start_init", 00:33:35.177 "id": 1 00:33:35.177 } 00:33:35.177 00:33:35.435 [2024-12-10 04:20:34.516059] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:35.435 INFO: response: 00:33:35.435 { 00:33:35.435 "jsonrpc": "2.0", 00:33:35.435 "id": 1, 00:33:35.435 "result": true 00:33:35.435 } 00:33:35.435 00:33:35.435 INFO: response: 00:33:35.435 { 00:33:35.435 "jsonrpc": "2.0", 00:33:35.435 "id": 1, 00:33:35.435 "result": true 00:33:35.435 } 00:33:35.435 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.435 04:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.435 04:20:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.435 INFO: Setting log level to 40 00:33:35.435 INFO: Setting log level to 40 00:33:35.435 INFO: Setting log level to 40 00:33:35.435 [2024-12-10 04:20:34.529369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.435 04:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.435 04:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.435 04:20:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 Nvme0n1 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 [2024-12-10 04:20:37.446548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.714 [ 00:33:38.714 { 00:33:38.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:38.714 "subtype": "Discovery", 00:33:38.714 "listen_addresses": [], 00:33:38.714 "allow_any_host": true, 00:33:38.714 "hosts": [] 00:33:38.714 }, 00:33:38.714 { 00:33:38.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:38.714 "subtype": "NVMe", 00:33:38.714 "listen_addresses": [ 00:33:38.714 { 00:33:38.714 "trtype": "TCP", 00:33:38.714 "adrfam": "IPv4", 00:33:38.714 "traddr": "10.0.0.2", 00:33:38.714 "trsvcid": "4420" 00:33:38.714 } 00:33:38.714 ], 00:33:38.714 "allow_any_host": true, 00:33:38.714 "hosts": [], 00:33:38.714 "serial_number": 
"SPDK00000000000001", 00:33:38.714 "model_number": "SPDK bdev Controller", 00:33:38.714 "max_namespaces": 1, 00:33:38.714 "min_cntlid": 1, 00:33:38.714 "max_cntlid": 65519, 00:33:38.714 "namespaces": [ 00:33:38.714 { 00:33:38.714 "nsid": 1, 00:33:38.714 "bdev_name": "Nvme0n1", 00:33:38.714 "name": "Nvme0n1", 00:33:38.714 "nguid": "605A50C25C984C3F9B5370D4B2AB20B8", 00:33:38.714 "uuid": "605a50c2-5c98-4c3f-9b53-70d4b2ab20b8" 00:33:38.714 } 00:33:38.714 ] 00:33:38.714 } 00:33:38.714 ] 00:33:38.714 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:38.714 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:38.715 04:20:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.715 rmmod nvme_tcp 00:33:38.715 rmmod nvme_fabrics 00:33:38.715 rmmod nvme_keyring 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 304418 ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 304418 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 304418 ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 304418 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304418 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304418' 00:33:38.715 killing process with pid 304418 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 304418 00:33:38.715 04:20:37 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 304418 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.092 04:20:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.092 04:20:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:40.092 04:20:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.626 04:20:41 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:42.626 00:33:42.626 real 0m22.409s 00:33:42.626 user 0m29.359s 00:33:42.626 sys 0m6.148s 00:33:42.626 04:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.626 04:20:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:42.626 ************************************ 00:33:42.626 END TEST nvmf_identify_passthru 00:33:42.626 ************************************ 00:33:42.626 04:20:41 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:42.626 04:20:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:42.626 04:20:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.626 04:20:41 -- common/autotest_common.sh@10 -- # set +x 00:33:42.626 ************************************ 00:33:42.626 START TEST nvmf_dif 00:33:42.626 ************************************ 00:33:42.626 04:20:41 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:42.626 * Looking for test storage... 
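The identify_passthru pass/fail logic above reduces to fetching identify data over PCIe and again through the passthru subsystem over NVMe/TCP, then requiring identical Serial and Model numbers. Condensed from the commands in this run (paths relative to the spdk checkout):

pcie_sn=$(build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 \
          | awk '/Serial Number:/ {print $3}')
tcp_sn=$(build/bin/spdk_nvme_identify \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
          | awk '/Serial Number:/ {print $3}')
[ "$pcie_sn" = "$tcp_sn" ] || exit 1     # BTLJ7244049A1P0FGN on both sides here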
00:33:42.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:42.626 04:20:41 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:42.626 04:20:41 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:42.626 04:20:41 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:42.626 04:20:41 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:42.626 04:20:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:42.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.627 --rc genhtml_branch_coverage=1 00:33:42.627 --rc genhtml_function_coverage=1 00:33:42.627 --rc genhtml_legend=1 00:33:42.627 --rc geninfo_all_blocks=1 00:33:42.627 --rc geninfo_unexecuted_blocks=1 00:33:42.627 00:33:42.627 ' 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:42.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.627 --rc genhtml_branch_coverage=1 00:33:42.627 --rc genhtml_function_coverage=1 00:33:42.627 --rc genhtml_legend=1 00:33:42.627 --rc geninfo_all_blocks=1 00:33:42.627 --rc geninfo_unexecuted_blocks=1 00:33:42.627 00:33:42.627 ' 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:33:42.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.627 --rc genhtml_branch_coverage=1 00:33:42.627 --rc genhtml_function_coverage=1 00:33:42.627 --rc genhtml_legend=1 00:33:42.627 --rc geninfo_all_blocks=1 00:33:42.627 --rc geninfo_unexecuted_blocks=1 00:33:42.627 00:33:42.627 ' 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:42.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.627 --rc genhtml_branch_coverage=1 00:33:42.627 --rc genhtml_function_coverage=1 00:33:42.627 --rc genhtml_legend=1 00:33:42.627 --rc geninfo_all_blocks=1 00:33:42.627 --rc geninfo_unexecuted_blocks=1 00:33:42.627 00:33:42.627 ' 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.627 04:20:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.627 04:20:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.627 04:20:41 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.627 04:20:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.627 04:20:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:42.627 04:20:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:42.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:42.627 04:20:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:42.627 04:20:41 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:42.627 04:20:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:49.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.338 
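gather_supported_nvmf_pci_devs is re-run here for the dif test and finds the same two E810 ports. A hedged sketch of the scan it performs, assuming the sysfs layout nvmf/common.sh walks (Intel vendor 0x8086; E810 device IDs 0x159b/0x1592 as in the tables above):

for dev in /sys/bus/pci/devices/*; do
    vendor=$(< "$dev/vendor") device=$(< "$dev/device")
    # keep Intel E810 functions only
    if [[ $vendor == 0x8086 && $device =~ ^0x(159b|1592)$ ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
        ls "$dev/net" 2> /dev/null     # kernel netdev name, e.g. cvl_0_0
    fi
done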
04:20:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:49.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:49.338 Found net devices under 0000:af:00.0: cvl_0_0 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:49.338 Found net devices under 0000:af:00.1: cvl_0_1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:33:49.338 00:33:49.338 --- 10.0.0.2 ping statistics --- 00:33:49.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.338 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:49.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:33:49.338 00:33:49.338 --- 10.0.0.1 ping statistics --- 00:33:49.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.338 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:49.338 04:20:47 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:51.267 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:51.267 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:51.267 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.267 04:20:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:51.267 04:20:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=309999 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:51.267 04:20:50 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 309999 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 309999 ']' 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:51.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.267 04:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.267 [2024-12-10 04:20:50.439926] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:33:51.267 [2024-12-10 04:20:50.439971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.267 [2024-12-10 04:20:50.499943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.267 [2024-12-10 04:20:50.542264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.267 [2024-12-10 04:20:50.542297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.267 [2024-12-10 04:20:50.542304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.267 [2024-12-10 04:20:50.542311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.267 [2024-12-10 04:20:50.542316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.267 [2024-12-10 04:20:50.542801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:51.527 04:20:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.527 04:20:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.527 04:20:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:51.527 04:20:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.527 [2024-12-10 04:20:50.679399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.527 04:20:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.527 04:20:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.527 ************************************ 00:33:51.527 START TEST fio_dif_1_default 00:33:51.527 ************************************ 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.527 bdev_null0 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.527 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.528 [2024-12-10 04:20:50.755740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.528 { 00:33:51.528 "params": { 00:33:51.528 "name": "Nvme$subsystem", 00:33:51.528 "trtype": "$TEST_TRANSPORT", 00:33:51.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.528 "adrfam": "ipv4", 00:33:51.528 "trsvcid": "$NVMF_PORT", 00:33:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.528 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:51.528 "hdgst": ${hdgst:-false}, 00:33:51.528 "ddgst": ${ddgst:-false} 00:33:51.528 }, 00:33:51.528 "method": "bdev_nvme_attach_controller" 00:33:51.528 } 00:33:51.528 EOF 00:33:51.528 )") 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.528 "params": { 00:33:51.528 "name": "Nvme0", 00:33:51.528 "trtype": "tcp", 00:33:51.528 "traddr": "10.0.0.2", 00:33:51.528 "adrfam": "ipv4", 00:33:51.528 "trsvcid": "4420", 00:33:51.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.528 "hdgst": false, 00:33:51.528 "ddgst": false 00:33:51.528 }, 00:33:51.528 "method": "bdev_nvme_attach_controller" 00:33:51.528 }' 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.528 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.803 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.803 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.803 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.803 04:20:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.068 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:52.068 fio-3.35 00:33:52.068 Starting 1 thread 00:34:04.278 00:34:04.278 filename0: (groupid=0, jobs=1): err= 0: pid=310237: Tue Dec 10 04:21:01 2024 00:34:04.278 read: IOPS=218, BW=875KiB/s (896kB/s)(8768KiB/10016msec) 00:34:04.278 slat (nsec): min=5542, max=26358, avg=6102.47, stdev=794.01 00:34:04.278 clat (usec): min=372, max=45532, avg=18259.68, stdev=20197.56 00:34:04.278 lat (usec): min=378, max=45558, avg=18265.78, stdev=20197.53 00:34:04.278 clat percentiles (usec): 00:34:04.278 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 412], 00:34:04.278 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 545], 60.00th=[40633], 00:34:04.278 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:04.278 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:34:04.278 | 99.99th=[45351] 00:34:04.278 bw ( KiB/s): min= 672, max= 1056, per=99.95%, avg=875.20, stdev=89.48, samples=20 00:34:04.278 iops : min= 168, max= 264, avg=218.80, stdev=22.37, samples=20 00:34:04.278 lat (usec) : 500=48.45%, 750=7.76% 00:34:04.278 lat (msec) : 50=43.80% 00:34:04.278 cpu : usr=92.44%, sys=7.31%, ctx=10, majf=0, minf=0 00:34:04.278 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.278 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.278 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:04.278 
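Just before the job starts, the harness decides what to LD_PRELOAD ahead of the SPDK fio plugin: it runs ldd against the plugin binary, greps for each known sanitizer runtime (libasan, then libclang_rt.asan), and takes the resolved path from ldd's third column. On this build neither matches, so asan_lib stays empty and only the plugin itself is preloaded. A condensed sketch of that probe — the plugin path is this workspace's copy, and the /dev/fd/62 and /dev/fd/61 arguments (JSON config and fio job file) are supplied by the caller via process substitution, so this is illustrative rather than directly runnable on its own:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path; empty if not linked
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# preload the sanitizer runtime (empty on this build) plus the plugin,
# then launch fio with the spdk_bdev ioengine against the generated config
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61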
00:34:04.278 Run status group 0 (all jobs): 00:34:04.278 READ: bw=875KiB/s (896kB/s), 875KiB/s-875KiB/s (896kB/s-896kB/s), io=8768KiB (8978kB), run=10016-10016msec 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:04.278 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 00:34:04.279 real 0m11.164s 00:34:04.279 user 0m16.178s 00:34:04.279 sys 0m1.013s 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 ************************************ 00:34:04.279 END TEST fio_dif_1_default 00:34:04.279 ************************************ 00:34:04.279 04:21:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:04.279 04:21:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:04.279 04:21:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 ************************************ 00:34:04.279 START TEST fio_dif_1_multi_subsystems 00:34:04.279 ************************************ 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 bdev_null0 00:34:04.279 04:21:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 [2024-12-10 04:21:01.986386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 bdev_null1 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.279 { 00:34:04.279 "params": { 00:34:04.279 "name": "Nvme$subsystem", 00:34:04.279 "trtype": "$TEST_TRANSPORT", 00:34:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.279 "adrfam": "ipv4", 00:34:04.279 "trsvcid": "$NVMF_PORT", 00:34:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.279 "hdgst": ${hdgst:-false}, 00:34:04.279 "ddgst": ${ddgst:-false} 00:34:04.279 }, 00:34:04.279 "method": "bdev_nvme_attach_controller" 00:34:04.279 } 00:34:04.279 EOF 00:34:04.279 )") 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.279 { 00:34:04.279 "params": { 00:34:04.279 "name": "Nvme$subsystem", 00:34:04.279 "trtype": "$TEST_TRANSPORT", 00:34:04.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.279 "adrfam": "ipv4", 00:34:04.279 "trsvcid": "$NVMF_PORT", 00:34:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.279 "hdgst": ${hdgst:-false}, 00:34:04.279 "ddgst": ${ddgst:-false} 00:34:04.279 }, 00:34:04.279 "method": "bdev_nvme_attach_controller" 00:34:04.279 } 00:34:04.279 EOF 00:34:04.279 )") 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:04.279 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:04.279 "params": { 00:34:04.279 "name": "Nvme0", 00:34:04.279 "trtype": "tcp", 00:34:04.279 "traddr": "10.0.0.2", 00:34:04.279 "adrfam": "ipv4", 00:34:04.279 "trsvcid": "4420", 00:34:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:04.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:04.279 "hdgst": false, 00:34:04.279 "ddgst": false 00:34:04.279 }, 00:34:04.279 "method": "bdev_nvme_attach_controller" 00:34:04.279 },{ 00:34:04.279 "params": { 00:34:04.279 "name": "Nvme1", 00:34:04.279 "trtype": "tcp", 00:34:04.279 "traddr": "10.0.0.2", 00:34:04.279 "adrfam": "ipv4", 00:34:04.279 "trsvcid": "4420", 00:34:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:04.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:04.280 "hdgst": false, 00:34:04.280 "ddgst": false 00:34:04.280 }, 00:34:04.280 "method": "bdev_nvme_attach_controller" 00:34:04.280 }' 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:04.280 04:21:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:04.280 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:04.280 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:04.280 fio-3.35 00:34:04.280 Starting 2 threads 00:34:14.267 00:34:14.268 filename0: (groupid=0, jobs=1): err= 0: pid=312349: Tue Dec 10 04:21:13 2024 00:34:14.268 read: IOPS=104, BW=419KiB/s (429kB/s)(4208KiB/10039msec) 00:34:14.268 slat (nsec): min=5721, max=54616, avg=14306.03, stdev=9690.34 00:34:14.268 clat (usec): min=413, max=42571, avg=38127.51, stdev=11641.51 00:34:14.268 lat (usec): min=420, max=42578, avg=38141.81, stdev=11641.38 00:34:14.268 clat percentiles (usec): 00:34:14.268 | 1.00th=[ 429], 5.00th=[ 611], 10.00th=[40633], 20.00th=[41157], 00:34:14.268 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:14.268 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:14.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:14.268 | 99.99th=[42730] 00:34:14.268 bw ( KiB/s): min= 352, max= 512, per=52.16%, avg=419.20, stdev=42.68, samples=20 00:34:14.268 iops : min= 88, max= 128, avg=104.80, stdev=10.67, samples=20 00:34:14.268 lat (usec) : 500=3.04%, 750=5.61%, 1000=0.10% 00:34:14.268 lat (msec) : 50=91.25% 00:34:14.268 cpu : usr=99.01%, sys=0.70%, ctx=6, majf=0, minf=137 00:34:14.268 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.268 issued rwts: total=1052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.268 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:14.268 filename1: (groupid=0, jobs=1): err= 0: pid=312350: Tue Dec 10 04:21:13 2024 00:34:14.268 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10036msec) 00:34:14.268 slat (nsec): min=6027, max=55000, avg=15462.15, stdev=10160.43 00:34:14.268 clat (usec): min=571, max=42382, avg=41597.44, stdev=2678.94 00:34:14.268 lat (usec): min=577, max=42420, avg=41612.90, stdev=2678.69 00:34:14.268 clat percentiles (usec): 00:34:14.268 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:14.268 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:14.268 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:14.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:14.268 | 99.99th=[42206] 00:34:14.268 bw ( KiB/s): min= 352, max= 416, per=47.80%, avg=384.00, stdev=14.68, samples=20 00:34:14.268 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:34:14.268 lat (usec) : 750=0.41% 00:34:14.268 lat (msec) : 50=99.59% 00:34:14.268 cpu : usr=97.29%, sys=2.40%, ctx=28, majf=0, minf=159 00:34:14.268 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:14.268 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.268 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.268 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:14.268 00:34:14.268 Run status group 0 (all jobs): 00:34:14.268 READ: bw=803KiB/s (823kB/s), 384KiB/s-419KiB/s (393kB/s-429kB/s), io=8064KiB (8258kB), run=10036-10039msec 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.268 00:34:14.268 real 0m11.507s 00:34:14.268 user 0m26.988s 00:34:14.268 sys 0m0.667s 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 ************************************ 00:34:14.268 END TEST fio_dif_1_multi_subsystems 00:34:14.268 ************************************ 00:34:14.268 04:21:13 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:14.268 04:21:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:14.268 04:21:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 ************************************ 00:34:14.268 START TEST fio_dif_rand_params 00:34:14.268 ************************************ 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 bdev_null0 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.268 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:14.528 [2024-12-10 04:21:13.564448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.528 { 00:34:14.528 "params": { 00:34:14.528 "name": "Nvme$subsystem", 00:34:14.528 "trtype": "$TEST_TRANSPORT", 00:34:14.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.528 "adrfam": "ipv4", 00:34:14.528 "trsvcid": "$NVMF_PORT", 00:34:14.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.528 "hdgst": ${hdgst:-false}, 00:34:14.528 "ddgst": ${ddgst:-false} 00:34:14.528 }, 00:34:14.528 "method": "bdev_nvme_attach_controller" 00:34:14.528 } 00:34:14.528 EOF 00:34:14.528 )") 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:14.528 "params": { 00:34:14.528 "name": "Nvme0", 00:34:14.528 "trtype": "tcp", 00:34:14.528 "traddr": "10.0.0.2", 00:34:14.528 "adrfam": "ipv4", 00:34:14.528 "trsvcid": "4420", 00:34:14.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:14.528 "hdgst": false, 00:34:14.528 "ddgst": false 00:34:14.528 }, 00:34:14.528 "method": "bdev_nvme_attach_controller" 00:34:14.528 }' 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:14.528 04:21:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.788 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:14.788 ... 
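Each create_subsystem call traced earlier in this test reduces to four RPCs against the running nvmf_tgt (rpc_cmd in the harness wraps scripts/rpc.py; the sketch below assumes the default /var/tmp/spdk.sock socket named in the waitforlisten message above). Condensed for the DIF-type-3 null bdev this fio_dif_rand_params subtest uses, with the rpc.py path taken from this workspace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 3
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# subsystem that any host may connect to
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
# expose the bdev as a namespace, then listen on the namespaced interface
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The later subtests in this trace repeat the same sequence with --dif-type 2 and additional cnode1/cnode2 subsystems.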
00:34:14.788 fio-3.35 00:34:14.788 Starting 3 threads 00:34:21.358 00:34:21.358 filename0: (groupid=0, jobs=1): err= 0: pid=314598: Tue Dec 10 04:21:19 2024 00:34:21.358 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(190MiB/5047msec) 00:34:21.358 slat (nsec): min=6251, max=25907, avg=10862.22, stdev=1869.41 00:34:21.358 clat (usec): min=3532, max=52822, avg=9918.88, stdev=6636.22 00:34:21.358 lat (usec): min=3539, max=52830, avg=9929.75, stdev=6636.24 00:34:21.358 clat percentiles (usec): 00:34:21.358 | 1.00th=[ 4686], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 7963], 00:34:21.358 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:34:21.358 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:34:21.358 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[52691], 00:34:21.358 | 99.99th=[52691] 00:34:21.358 bw ( KiB/s): min=28672, max=47872, per=32.90%, avg=38835.20, stdev=5218.68, samples=10 00:34:21.358 iops : min= 224, max= 374, avg=303.40, stdev=40.77, samples=10 00:34:21.359 lat (msec) : 4=0.86%, 10=78.95%, 20=17.50%, 50=2.04%, 100=0.66% 00:34:21.359 cpu : usr=94.45%, sys=5.27%, ctx=7, majf=0, minf=48 00:34:21.359 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.359 filename0: (groupid=0, jobs=1): err= 0: pid=314599: Tue Dec 10 04:21:19 2024 00:34:21.359 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(195MiB/5043msec) 00:34:21.359 slat (nsec): min=6175, max=26938, avg=10676.81, stdev=2010.49 00:34:21.359 clat (usec): min=4139, max=51555, avg=9652.58, stdev=4688.50 00:34:21.359 lat (usec): min=4145, max=51561, avg=9663.25, stdev=4688.48 00:34:21.359 clat percentiles (usec): 00:34:21.359 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7767], 00:34:21.359 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:21.359 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:34:21.359 | 99.00th=[46924], 99.50th=[48497], 99.90th=[50070], 99.95th=[51643], 00:34:21.359 | 99.99th=[51643] 00:34:21.359 bw ( KiB/s): min=29952, max=45824, per=33.81%, avg=39910.40, stdev=4248.62, samples=10 00:34:21.359 iops : min= 234, max= 358, avg=311.80, stdev=33.19, samples=10 00:34:21.359 lat (msec) : 10=64.51%, 20=34.21%, 50=1.15%, 100=0.13% 00:34:21.359 cpu : usr=94.23%, sys=5.49%, ctx=7, majf=0, minf=54 00:34:21.359 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 issued rwts: total=1561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.359 filename0: (groupid=0, jobs=1): err= 0: pid=314600: Tue Dec 10 04:21:19 2024 00:34:21.359 read: IOPS=311, BW=39.0MiB/s (40.9MB/s)(197MiB/5044msec) 00:34:21.359 slat (nsec): min=6161, max=25841, avg=10804.09, stdev=2137.22 00:34:21.359 clat (usec): min=3712, max=51334, avg=9580.32, stdev=4707.10 00:34:21.359 lat (usec): min=3724, max=51347, avg=9591.12, stdev=4707.15 00:34:21.359 clat percentiles (usec): 00:34:21.359 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 
7701], 00:34:21.359 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:21.359 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11207], 95.00th=[11863], 00:34:21.359 | 99.00th=[46400], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:34:21.359 | 99.99th=[51119] 00:34:21.359 bw ( KiB/s): min=34816, max=48128, per=34.07%, avg=40217.60, stdev=4102.13, samples=10 00:34:21.359 iops : min= 272, max= 376, avg=314.20, stdev=32.05, samples=10 00:34:21.359 lat (msec) : 4=0.64%, 10=66.62%, 20=31.47%, 50=0.95%, 100=0.32% 00:34:21.359 cpu : usr=94.53%, sys=5.20%, ctx=9, majf=0, minf=41 00:34:21.359 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.359 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.359 00:34:21.359 Run status group 0 (all jobs): 00:34:21.359 READ: bw=115MiB/s (121MB/s), 37.6MiB/s-39.0MiB/s (39.5MB/s-40.9MB/s), io=582MiB (610MB), run=5043-5047msec 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 bdev_null0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 [2024-12-10 04:21:19.724203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 bdev_null1 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 bdev_null2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.359 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:21.360 { 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme$subsystem", 00:34:21.360 "trtype": "$TEST_TRANSPORT", 00:34:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "$NVMF_PORT", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.360 "hdgst": ${hdgst:-false}, 00:34:21.360 "ddgst": ${ddgst:-false} 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 } 00:34:21.360 EOF 00:34:21.360 )") 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:21.360 { 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme$subsystem", 00:34:21.360 "trtype": "$TEST_TRANSPORT", 00:34:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "$NVMF_PORT", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.360 "hdgst": ${hdgst:-false}, 00:34:21.360 "ddgst": ${ddgst:-false} 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 } 00:34:21.360 EOF 00:34:21.360 )") 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:21.360 { 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme$subsystem", 00:34:21.360 "trtype": "$TEST_TRANSPORT", 00:34:21.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "$NVMF_PORT", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.360 "hdgst": ${hdgst:-false}, 00:34:21.360 "ddgst": ${ddgst:-false} 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 } 00:34:21.360 EOF 00:34:21.360 )") 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme0", 00:34:21.360 "trtype": "tcp", 00:34:21.360 "traddr": "10.0.0.2", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "4420", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.360 "hdgst": false, 00:34:21.360 "ddgst": false 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 },{ 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme1", 00:34:21.360 "trtype": "tcp", 00:34:21.360 "traddr": "10.0.0.2", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "4420", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:21.360 "hdgst": false, 00:34:21.360 "ddgst": false 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 },{ 00:34:21.360 "params": { 00:34:21.360 "name": "Nvme2", 00:34:21.360 "trtype": "tcp", 00:34:21.360 "traddr": "10.0.0.2", 00:34:21.360 "adrfam": "ipv4", 00:34:21.360 "trsvcid": "4420", 00:34:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:21.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:21.360 "hdgst": false, 00:34:21.360 "ddgst": false 00:34:21.360 }, 00:34:21.360 "method": "bdev_nvme_attach_controller" 00:34:21.360 }' 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:21.360 04:21:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.360 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:21.360 ... 00:34:21.360 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:21.360 ... 00:34:21.360 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:21.360 ... 00:34:21.360 fio-3.35 00:34:21.360 Starting 24 threads 00:34:33.559 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315742: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10004msec) 00:34:33.559 slat (nsec): min=7478, max=56274, avg=11760.55, stdev=5675.82 00:34:33.559 clat (usec): min=5250, max=32271, avg=30209.05, stdev=2236.27 00:34:33.559 lat (usec): min=5259, max=32287, avg=30220.81, stdev=2235.40 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[16909], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:33.559 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:33.559 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:34:33.559 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:34:33.559 | 99.99th=[32375] 00:34:33.559 bw ( KiB/s): min= 2048, max= 2432, per=4.17%, avg=2108.63, stdev=98.86, samples=19 00:34:33.559 iops : min= 512, max= 608, avg=527.16, stdev=24.71, samples=19 00:34:33.559 lat (msec) : 10=0.44%, 20=1.08%, 50=98.48% 00:34:33.559 cpu : usr=98.69%, sys=0.91%, ctx=10, majf=0, minf=9 00:34:33.559 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315743: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10004msec) 00:34:33.559 slat (nsec): min=6576, max=69374, avg=23887.21, stdev=11349.72 00:34:33.559 clat (usec): min=18707, max=45013, avg=30359.70, stdev=1085.14 00:34:33.559 lat (usec): min=18721, max=45026, avg=30383.59, stdev=1084.97 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.559 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.559 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:33.559 | 99.00th=[31589], 99.50th=[31851], 99.90th=[44827], 99.95th=[44827], 00:34:33.559 | 99.99th=[44827] 00:34:33.559 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:34:33.559 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.559 lat (msec) : 20=0.27%, 50=99.73% 00:34:33.559 cpu : usr=98.50%, sys=1.11%, ctx=14, majf=0, minf=9 
00:34:33.559 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315744: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:34:33.559 slat (nsec): min=5548, max=66351, avg=21837.46, stdev=10074.40 00:34:33.559 clat (usec): min=11088, max=37542, avg=30166.55, stdev=1840.57 00:34:33.559 lat (usec): min=11096, max=37564, avg=30188.39, stdev=1841.61 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[17695], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.559 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.559 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:33.559 | 99.00th=[31589], 99.50th=[31851], 99.90th=[36439], 99.95th=[36439], 00:34:33.559 | 99.99th=[37487] 00:34:33.559 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2108.63, stdev=78.31, samples=19 00:34:33.559 iops : min= 512, max= 576, avg=527.16, stdev=19.58, samples=19 00:34:33.559 lat (msec) : 20=1.14%, 50=98.86% 00:34:33.559 cpu : usr=98.50%, sys=1.11%, ctx=14, majf=0, minf=9 00:34:33.559 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315745: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:34:33.559 slat (nsec): min=7567, max=66643, avg=20325.99, stdev=10126.14 00:34:33.559 clat (usec): min=11220, max=32115, avg=30271.19, stdev=1420.94 00:34:33.559 lat (usec): min=11228, max=32130, avg=30291.51, stdev=1421.02 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[26346], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.559 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.559 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:34:33.559 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:34:33.559 | 99.99th=[32113] 00:34:33.559 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19 00:34:33.559 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:34:33.559 lat (msec) : 20=0.61%, 50=99.39% 00:34:33.559 cpu : usr=98.70%, sys=0.92%, ctx=14, majf=0, minf=11 00:34:33.559 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315746: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:34:33.559 slat (nsec): min=4617, 
max=90045, avg=31929.79, stdev=13841.50 00:34:33.559 clat (usec): min=17780, max=51647, avg=30289.59, stdev=1567.89 00:34:33.559 lat (usec): min=17804, max=51660, avg=30321.52, stdev=1567.01 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:33.559 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:33.559 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.559 | 99.00th=[31589], 99.50th=[32113], 99.90th=[51643], 99.95th=[51643], 00:34:33.559 | 99.99th=[51643] 00:34:33.559 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2088.58, stdev=74.17, samples=19 00:34:33.559 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.559 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:33.559 cpu : usr=98.54%, sys=1.08%, ctx=16, majf=0, minf=9 00:34:33.559 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315747: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=557, BW=2230KiB/s (2284kB/s)(21.8MiB/10005msec) 00:34:33.559 slat (nsec): min=6572, max=86092, avg=20546.08, stdev=14503.36 00:34:33.559 clat (usec): min=4731, max=55567, avg=28566.71, stdev=5305.56 00:34:33.559 lat (usec): min=4751, max=55590, avg=28587.25, stdev=5308.09 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[16450], 5.00th=[19530], 10.00th=[20317], 20.00th=[23987], 00:34:33.559 | 30.00th=[26870], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:33.559 | 70.00th=[30278], 80.00th=[30540], 90.00th=[31327], 95.00th=[35390], 00:34:33.559 | 99.00th=[45876], 99.50th=[47973], 99.90th=[55313], 99.95th=[55313], 00:34:33.559 | 99.99th=[55313] 00:34:33.559 bw ( KiB/s): min= 1907, max= 2560, per=4.40%, avg=2224.95, stdev=163.48, samples=20 00:34:33.559 iops : min= 476, max= 640, avg=556.20, stdev=40.95, samples=20 00:34:33.559 lat (msec) : 10=0.29%, 20=6.79%, 50=92.45%, 100=0.47% 00:34:33.559 cpu : usr=98.56%, sys=1.03%, ctx=15, majf=0, minf=9 00:34:33.559 IO depths : 1=0.3%, 2=3.0%, 4=12.7%, 8=70.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:34:33.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.559 issued rwts: total=5578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.559 filename0: (groupid=0, jobs=1): err= 0: pid=315748: Tue Dec 10 04:21:30 2024 00:34:33.559 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10012msec) 00:34:33.559 slat (nsec): min=7562, max=89561, avg=30552.26, stdev=12871.93 00:34:33.559 clat (usec): min=17923, max=31973, avg=30273.85, stdev=793.51 00:34:33.559 lat (usec): min=17952, max=31995, avg=30304.40, stdev=792.83 00:34:33.559 clat percentiles (usec): 00:34:33.559 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:34:33.560 | 
99.99th=[31851] 00:34:33.560 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2092.80, stdev=62.64, samples=20 00:34:33.560 iops : min= 512, max= 544, avg=523.20, stdev=15.66, samples=20 00:34:33.560 lat (msec) : 20=0.30%, 50=99.70% 00:34:33.560 cpu : usr=98.49%, sys=1.11%, ctx=14, majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename0: (groupid=0, jobs=1): err= 0: pid=315749: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10001msec) 00:34:33.560 slat (nsec): min=5373, max=67176, avg=22448.14, stdev=11924.25 00:34:33.560 clat (usec): min=18726, max=44819, avg=30322.51, stdev=1371.89 00:34:33.560 lat (usec): min=18734, max=44832, avg=30344.95, stdev=1372.13 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[27395], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[38536], 99.90th=[44827], 99.95th=[44827], 00:34:33.560 | 99.99th=[44827] 00:34:33.560 bw ( KiB/s): min= 1968, max= 2176, per=4.13%, avg=2090.95, stdev=69.14, samples=19 00:34:33.560 iops : min= 492, max= 544, avg=522.74, stdev=17.28, samples=19 00:34:33.560 lat (msec) : 20=0.42%, 50=99.58% 00:34:33.560 cpu : usr=98.54%, sys=1.07%, ctx=14, majf=0, minf=9 00:34:33.560 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315750: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:34:33.560 slat (nsec): min=7638, max=67446, avg=21485.51, stdev=9638.48 00:34:33.560 clat (usec): min=11305, max=31992, avg=30256.88, stdev=1416.41 00:34:33.560 lat (usec): min=11318, max=32005, avg=30278.37, stdev=1416.91 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[26346], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:34:33.560 | 99.99th=[32113] 00:34:33.560 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19 00:34:33.560 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:34:33.560 lat (msec) : 20=0.61%, 50=99.39% 00:34:33.560 cpu : usr=98.56%, sys=1.05%, ctx=19, majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5264,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315751: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=526, BW=2105KiB/s (2156kB/s)(20.6MiB/10002msec) 00:34:33.560 slat (nsec): min=5023, max=67314, avg=20431.42, stdev=11541.40 00:34:33.560 clat (usec): min=8672, max=36129, avg=30241.22, stdev=1694.03 00:34:33.560 lat (usec): min=8681, max=36143, avg=30261.65, stdev=1694.14 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32375], 99.95th=[35390], 00:34:33.560 | 99.99th=[35914] 00:34:33.560 bw ( KiB/s): min= 2048, max= 2308, per=4.16%, avg=2102.11, stdev=78.27, samples=19 00:34:33.560 iops : min= 512, max= 577, avg=525.53, stdev=19.57, samples=19 00:34:33.560 lat (msec) : 10=0.04%, 20=0.84%, 50=99.13% 00:34:33.560 cpu : usr=98.76%, sys=0.85%, ctx=14, majf=0, minf=9 00:34:33.560 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315752: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:34:33.560 slat (nsec): min=5806, max=88541, avg=32175.64, stdev=13442.19 00:34:33.560 clat (usec): min=17782, max=51313, avg=30290.40, stdev=1408.33 00:34:33.560 lat (usec): min=17797, max=51329, avg=30322.58, stdev=1407.13 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31851], 99.90th=[51119], 99.95th=[51119], 00:34:33.560 | 99.99th=[51119] 00:34:33.560 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:34:33.560 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.560 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:33.560 cpu : usr=98.61%, sys=1.00%, ctx=13, majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315753: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:34:33.560 slat (nsec): min=5840, max=89779, avg=30666.59, stdev=14717.34 00:34:33.560 clat (usec): min=17893, max=51180, avg=30285.20, stdev=1402.04 00:34:33.560 lat (usec): min=17902, max=51196, avg=30315.87, stdev=1401.48 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[29492], 5.00th=[29754], 
10.00th=[29754], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31851], 99.90th=[51119], 99.95th=[51119], 00:34:33.560 | 99.99th=[51119] 00:34:33.560 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:34:33.560 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.560 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:33.560 cpu : usr=98.49%, sys=1.12%, ctx=14, majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315754: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10012msec) 00:34:33.560 slat (nsec): min=4471, max=90268, avg=31627.27, stdev=12504.56 00:34:33.560 clat (usec): min=17733, max=37268, avg=30263.04, stdev=816.02 00:34:33.560 lat (usec): min=17750, max=37282, avg=30294.66, stdev=814.90 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:34:33.560 | 99.99th=[37487] 00:34:33.560 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2092.80, stdev=62.64, samples=20 00:34:33.560 iops : min= 512, max= 544, avg=523.20, stdev=15.66, samples=20 00:34:33.560 lat (msec) : 20=0.30%, 50=99.70% 00:34:33.560 cpu : usr=98.53%, sys=1.08%, ctx=14, majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.560 filename1: (groupid=0, jobs=1): err= 0: pid=315755: Tue Dec 10 04:21:30 2024 00:34:33.560 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:34:33.560 slat (nsec): min=5637, max=87303, avg=29797.91, stdev=11308.10 00:34:33.560 clat (usec): min=17798, max=51004, avg=30333.82, stdev=1396.87 00:34:33.560 lat (usec): min=17830, max=51021, avg=30363.61, stdev=1395.54 00:34:33.560 clat percentiles (usec): 00:34:33.560 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.560 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.560 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.560 | 99.00th=[31589], 99.50th=[31851], 99.90th=[51119], 99.95th=[51119], 00:34:33.560 | 99.99th=[51119] 00:34:33.560 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:34:33.560 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.560 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:34:33.560 cpu : usr=98.59%, sys=1.02%, ctx=14, 
majf=0, minf=9 00:34:33.560 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.560 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename1: (groupid=0, jobs=1): err= 0: pid=315756: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=528, BW=2115KiB/s (2165kB/s)(20.7MiB/10018msec) 00:34:33.561 slat (nsec): min=7458, max=64644, avg=17223.92, stdev=8556.00 00:34:33.561 clat (usec): min=5209, max=32064, avg=30122.50, stdev=2368.24 00:34:33.561 lat (usec): min=5221, max=32076, avg=30139.72, stdev=2368.08 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[12780], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.561 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:33.561 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:34:33.561 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:34:33.561 | 99.99th=[32113] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2432, per=4.18%, avg=2112.00, stdev=97.39, samples=20 00:34:33.561 iops : min= 512, max= 608, avg=528.00, stdev=24.35, samples=20 00:34:33.561 lat (msec) : 10=0.30%, 20=1.21%, 50=98.49% 00:34:33.561 cpu : usr=98.61%, sys=0.99%, ctx=14, majf=0, minf=9 00:34:33.561 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename1: (groupid=0, jobs=1): err= 0: pid=315757: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=531, BW=2127KiB/s (2179kB/s)(20.8MiB/10010msec) 00:34:33.561 slat (nsec): min=7297, max=80558, avg=15987.19, stdev=10756.59 00:34:33.561 clat (usec): min=11689, max=39476, avg=29951.46, stdev=2189.37 00:34:33.561 lat (usec): min=11698, max=39484, avg=29967.45, stdev=2190.24 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[18744], 5.00th=[26346], 10.00th=[30016], 20.00th=[30016], 00:34:33.561 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:34:33.561 | 99.00th=[31589], 99.50th=[31589], 99.90th=[37487], 99.95th=[39584], 00:34:33.561 | 99.99th=[39584] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2656, per=4.21%, avg=2127.16, stdev=142.46, samples=19 00:34:33.561 iops : min= 512, max= 664, avg=531.79, stdev=35.61, samples=19 00:34:33.561 lat (msec) : 20=2.44%, 50=97.56% 00:34:33.561 cpu : usr=98.14%, sys=1.46%, ctx=14, majf=0, minf=9 00:34:33.561 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315758: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.5MiB/10002msec) 00:34:33.561 
slat (nsec): min=6253, max=87543, avg=29229.76, stdev=13055.48 00:34:33.561 clat (usec): min=18705, max=51103, avg=30223.53, stdev=1420.34 00:34:33.561 lat (usec): min=18721, max=51112, avg=30252.76, stdev=1420.69 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[24773], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:33.561 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.561 | 99.00th=[31327], 99.50th=[31589], 99.90th=[51119], 99.95th=[51119], 00:34:33.561 | 99.99th=[51119] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2097.68, stdev=62.40, samples=19 00:34:33.561 iops : min= 512, max= 544, avg=524.42, stdev=15.60, samples=19 00:34:33.561 lat (msec) : 20=0.76%, 50=99.09%, 100=0.15% 00:34:33.561 cpu : usr=98.71%, sys=0.90%, ctx=14, majf=0, minf=9 00:34:33.561 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315759: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10018msec) 00:34:33.561 slat (nsec): min=7092, max=64864, avg=21234.19, stdev=10140.84 00:34:33.561 clat (usec): min=4680, max=38136, avg=30056.00, stdev=2499.70 00:34:33.561 lat (usec): min=4694, max=38156, avg=30077.23, stdev=2500.42 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[12649], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.561 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:34:33.561 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:34:33.561 | 99.99th=[38011] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2480, per=4.18%, avg=2114.40, stdev=105.91, samples=20 00:34:33.561 iops : min= 512, max= 620, avg=528.60, stdev=26.48, samples=20 00:34:33.561 lat (msec) : 10=0.57%, 20=0.98%, 50=98.45% 00:34:33.561 cpu : usr=98.60%, sys=1.00%, ctx=34, majf=0, minf=9 00:34:33.561 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315760: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=531, BW=2126KiB/s (2177kB/s)(20.8MiB/10006msec) 00:34:33.561 slat (nsec): min=6309, max=94164, avg=27484.03, stdev=18201.76 00:34:33.561 clat (usec): min=5736, max=64108, avg=29962.58, stdev=3737.15 00:34:33.561 lat (usec): min=5750, max=64129, avg=29990.07, stdev=3736.71 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[17433], 5.00th=[23200], 10.00th=[26608], 20.00th=[30016], 00:34:33.561 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[33817], 00:34:33.561 | 99.00th=[44827], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 
00:34:33.561 | 99.99th=[64226] 00:34:33.561 bw ( KiB/s): min= 1923, max= 2304, per=4.20%, avg=2123.35, stdev=71.32, samples=20 00:34:33.561 iops : min= 480, max= 576, avg=530.80, stdev=17.94, samples=20 00:34:33.561 lat (msec) : 10=0.21%, 20=2.43%, 50=97.33%, 100=0.04% 00:34:33.561 cpu : usr=98.66%, sys=0.82%, ctx=92, majf=0, minf=9 00:34:33.561 IO depths : 1=0.2%, 2=1.5%, 4=6.2%, 8=75.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=90.3%, 8=7.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315761: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=526, BW=2105KiB/s (2156kB/s)(20.6MiB/10001msec) 00:34:33.561 slat (nsec): min=4626, max=67762, avg=24805.78, stdev=13246.91 00:34:33.561 clat (usec): min=11232, max=45726, avg=30210.13, stdev=1866.13 00:34:33.561 lat (usec): min=11240, max=45742, avg=30234.94, stdev=1866.47 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[18744], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.561 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.561 | 99.00th=[31589], 99.50th=[31851], 99.90th=[45876], 99.95th=[45876], 00:34:33.561 | 99.99th=[45876] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2299, per=4.16%, avg=2101.63, stdev=76.98, samples=19 00:34:33.561 iops : min= 512, max= 574, avg=525.37, stdev=19.14, samples=19 00:34:33.561 lat (msec) : 20=1.33%, 50=98.67% 00:34:33.561 cpu : usr=98.72%, sys=0.90%, ctx=14, majf=0, minf=9 00:34:33.561 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315762: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10001msec) 00:34:33.561 slat (nsec): min=5569, max=89065, avg=32818.72, stdev=12936.66 00:34:33.561 clat (usec): min=17783, max=48966, avg=30301.85, stdev=1325.87 00:34:33.561 lat (usec): min=17813, max=48982, avg=30334.67, stdev=1324.27 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:33.561 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.561 | 99.00th=[31589], 99.50th=[31851], 99.90th=[49021], 99.95th=[49021], 00:34:33.561 | 99.99th=[49021] 00:34:33.561 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:34:33.561 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:34:33.561 lat (msec) : 20=0.31%, 50=99.69% 00:34:33.561 cpu : usr=98.54%, sys=1.06%, ctx=14, majf=0, minf=9 00:34:33.561 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:33.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.561 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:33.561 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.561 filename2: (groupid=0, jobs=1): err= 0: pid=315763: Tue Dec 10 04:21:30 2024 00:34:33.561 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10012msec) 00:34:33.561 slat (nsec): min=7621, max=88385, avg=28862.92, stdev=12805.12 00:34:33.561 clat (usec): min=17964, max=32066, avg=30294.54, stdev=794.02 00:34:33.561 lat (usec): min=17981, max=32080, avg=30323.40, stdev=793.26 00:34:33.561 clat percentiles (usec): 00:34:33.561 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:33.561 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.561 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.561 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:34:33.561 | 99.99th=[32113] 00:34:33.561 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2092.80, stdev=62.64, samples=20 00:34:33.561 iops : min= 512, max= 544, avg=523.20, stdev=15.66, samples=20 00:34:33.562 lat (msec) : 20=0.30%, 50=99.70% 00:34:33.562 cpu : usr=98.66%, sys=0.92%, ctx=15, majf=0, minf=9 00:34:33.562 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:33.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.562 filename2: (groupid=0, jobs=1): err= 0: pid=315764: Tue Dec 10 04:21:30 2024 00:34:33.562 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.7MiB/10005msec) 00:34:33.562 slat (nsec): min=6411, max=94159, avg=29526.24, stdev=16123.30 00:34:33.562 clat (usec): min=4748, max=81990, avg=30022.68, stdev=3666.99 00:34:33.562 lat (usec): min=4756, max=82028, avg=30052.21, stdev=3668.29 00:34:33.562 clat percentiles (usec): 00:34:33.562 | 1.00th=[17957], 5.00th=[25822], 10.00th=[29754], 20.00th=[30016], 00:34:33.562 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:33.562 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:34:33.562 | 99.00th=[39060], 99.50th=[43254], 99.90th=[68682], 99.95th=[69731], 00:34:33.562 | 99.99th=[82314] 00:34:33.562 bw ( KiB/s): min= 1984, max= 2192, per=4.16%, avg=2105.26, stdev=69.60, samples=19 00:34:33.562 iops : min= 496, max= 548, avg=526.32, stdev=17.40, samples=19 00:34:33.562 lat (msec) : 10=0.34%, 20=1.63%, 50=97.73%, 100=0.30% 00:34:33.562 cpu : usr=98.74%, sys=0.69%, ctx=55, majf=0, minf=9 00:34:33.562 IO depths : 1=4.3%, 2=8.8%, 4=18.6%, 8=58.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:33.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 complete : 0=0.0%, 4=92.6%, 8=2.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 issued rwts: total=5288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.562 filename2: (groupid=0, jobs=1): err= 0: pid=315765: Tue Dec 10 04:21:30 2024 00:34:33.562 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:34:33.562 slat (nsec): min=7496, max=82074, avg=29066.19, stdev=13572.31 00:34:33.562 clat (usec): min=4904, max=76844, avg=30251.36, stdev=2349.21 00:34:33.562 lat (usec): min=4932, max=76858, avg=30280.43, stdev=2349.23 00:34:33.562 clat percentiles (usec): 00:34:33.562 | 
1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:33.562 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:33.562 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:33.562 | 99.00th=[31589], 99.50th=[32113], 99.90th=[55313], 99.95th=[55313], 00:34:33.562 | 99.99th=[77071] 00:34:33.562 bw ( KiB/s): min= 1923, max= 2304, per=4.14%, avg=2092.95, stdev=85.55, samples=20 00:34:33.562 iops : min= 480, max= 576, avg=523.20, stdev=21.47, samples=20 00:34:33.562 lat (msec) : 10=0.30%, 20=0.34%, 50=99.05%, 100=0.30% 00:34:33.562 cpu : usr=98.52%, sys=1.08%, ctx=14, majf=0, minf=10 00:34:33.562 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:33.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.562 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:33.562 00:34:33.562 Run status group 0 (all jobs): 00:34:33.562 READ: bw=49.4MiB/s (51.8MB/s), 2092KiB/s-2230KiB/s (2142kB/s-2284kB/s), io=495MiB (519MB), run=10001-10018msec 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 bdev_null0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
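
The create_subsystem helper traced here issues one RPC per step: it creates a DIF-protected null bdev (64 MiB, 512-byte blocks, 16 bytes of per-block metadata, DIF type 1 for this pass), wraps it in an NVMe-oF subsystem, attaches the bdev as a namespace, and finally exposes the subsystem on a TCP listener (the nvmf_subsystem_add_listener call and its *NOTICE* line follow just below). A minimal standalone sketch of the same sequence via scripts/rpc.py (assuming a running nvmf target with the TCP transport already created, as earlier in this run) would be:

    # create a 64 MiB null bdev: 512-byte data blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # create the subsystem and attach the bdev as a namespace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # expose the subsystem over NVMe/TCP
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the arguments map one to one.
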
00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 [2024-12-10 04:21:31.369899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 bdev_null1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:33.562 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.563 { 00:34:33.563 "params": { 00:34:33.563 "name": "Nvme$subsystem", 00:34:33.563 "trtype": "$TEST_TRANSPORT", 00:34:33.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.563 "adrfam": "ipv4", 00:34:33.563 "trsvcid": "$NVMF_PORT", 00:34:33.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.563 "hdgst": ${hdgst:-false}, 00:34:33.563 "ddgst": ${ddgst:-false} 00:34:33.563 }, 00:34:33.563 "method": "bdev_nvme_attach_controller" 00:34:33.563 } 00:34:33.563 EOF 00:34:33.563 )") 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.563 { 00:34:33.563 "params": { 00:34:33.563 "name": "Nvme$subsystem", 00:34:33.563 "trtype": "$TEST_TRANSPORT", 00:34:33.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.563 "adrfam": "ipv4", 00:34:33.563 "trsvcid": "$NVMF_PORT", 00:34:33.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.563 "hdgst": ${hdgst:-false}, 00:34:33.563 "ddgst": ${ddgst:-false} 00:34:33.563 }, 00:34:33.563 "method": "bdev_nvme_attach_controller" 00:34:33.563 } 00:34:33.563 EOF 00:34:33.563 )") 00:34:33.563 
04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.563 "params": { 00:34:33.563 "name": "Nvme0", 00:34:33.563 "trtype": "tcp", 00:34:33.563 "traddr": "10.0.0.2", 00:34:33.563 "adrfam": "ipv4", 00:34:33.563 "trsvcid": "4420", 00:34:33.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:33.563 "hdgst": false, 00:34:33.563 "ddgst": false 00:34:33.563 }, 00:34:33.563 "method": "bdev_nvme_attach_controller" 00:34:33.563 },{ 00:34:33.563 "params": { 00:34:33.563 "name": "Nvme1", 00:34:33.563 "trtype": "tcp", 00:34:33.563 "traddr": "10.0.0.2", 00:34:33.563 "adrfam": "ipv4", 00:34:33.563 "trsvcid": "4420", 00:34:33.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.563 "hdgst": false, 00:34:33.563 "ddgst": false 00:34:33.563 }, 00:34:33.563 "method": "bdev_nvme_attach_controller" 00:34:33.563 }' 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:33.563 04:21:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.563 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:33.563 ... 00:34:33.563 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:33.563 ... 
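Both paths handed to fio above are anonymous descriptors from process substitution: /dev/fd/62 carries the bdev_nvme attach JSON printed by gen_nvmf_target_json, while /dev/fd/61 carries the job file built by gen_fio_conf. The generated job file itself never appears in the trace, but one consistent with the echoed parameters (rw=randread, bs=8k,16k,128k for read/write/trim, iodepth=8, numjobs=2, runtime=5, one job per subsystem — hence the 4 threads started below) would look roughly like this reconstruction:

    # Hypothetical stand-in for the gen_fio_conf output normally passed as
    # /dev/fd/61 (the real file is generated on the fly; Nvme0n1/Nvme1n1 are
    # the bdev names the attach JSON above produces):
    cat <<'EOF' > dif_rand.fio
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

With that file on disk, the traced invocation collapses to the preloaded plugin plus the two configs: LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <attach JSON> dif_rand.fio.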
00:34:33.563 fio-3.35 00:34:33.563 Starting 4 threads 00:34:38.838 00:34:38.838 filename0: (groupid=0, jobs=1): err= 0: pid=317657: Tue Dec 10 04:21:37 2024 00:34:38.838 read: IOPS=2786, BW=21.8MiB/s (22.8MB/s)(109MiB/5002msec) 00:34:38.838 slat (nsec): min=6049, max=76672, avg=11565.32, stdev=6686.27 00:34:38.838 clat (usec): min=767, max=5635, avg=2835.37, stdev=432.75 00:34:38.838 lat (usec): min=780, max=5645, avg=2846.94, stdev=433.01 00:34:38.838 clat percentiles (usec): 00:34:38.838 | 1.00th=[ 1631], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2507], 00:34:38.838 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2966], 00:34:38.838 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3425], 00:34:38.838 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5080], 99.95th=[ 5276], 00:34:38.838 | 99.99th=[ 5604] 00:34:38.838 bw ( KiB/s): min=21600, max=23072, per=26.61%, avg=22405.33, stdev=575.67, samples=9 00:34:38.838 iops : min= 2700, max= 2884, avg=2800.67, stdev=71.96, samples=9 00:34:38.838 lat (usec) : 1000=0.22% 00:34:38.838 lat (msec) : 2=2.53%, 4=96.25%, 10=1.00% 00:34:38.838 cpu : usr=95.90%, sys=3.76%, ctx=8, majf=0, minf=9 00:34:38.838 IO depths : 1=0.5%, 2=8.3%, 4=62.4%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.838 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.838 issued rwts: total=13939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.838 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:38.838 filename0: (groupid=0, jobs=1): err= 0: pid=317658: Tue Dec 10 04:21:37 2024 00:34:38.838 read: IOPS=2527, BW=19.7MiB/s (20.7MB/s)(98.8MiB/5003msec) 00:34:38.838 slat (nsec): min=6014, max=69206, avg=11179.48, stdev=8011.28 00:34:38.838 clat (usec): min=590, max=5601, avg=3131.07, stdev=476.67 00:34:38.838 lat (usec): min=602, max=5615, avg=3142.25, stdev=476.48 00:34:38.838 clat percentiles (usec): 00:34:38.838 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2704], 20.00th=[ 2868], 00:34:38.838 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:34:38.838 | 70.00th=[ 3228], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 4080], 00:34:38.838 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5473], 00:34:38.838 | 99.99th=[ 5604] 00:34:38.838 bw ( KiB/s): min=19424, max=20976, per=24.03%, avg=20233.67, stdev=505.23, samples=9 00:34:38.838 iops : min= 2428, max= 2622, avg=2529.11, stdev=63.21, samples=9 00:34:38.838 lat (usec) : 750=0.01%, 1000=0.02% 00:34:38.838 lat (msec) : 2=0.62%, 4=93.79%, 10=5.56% 00:34:38.838 cpu : usr=96.68%, sys=2.98%, ctx=7, majf=0, minf=9 00:34:38.839 IO depths : 1=0.1%, 2=3.2%, 4=69.1%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 issued rwts: total=12647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.839 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:38.839 filename1: (groupid=0, jobs=1): err= 0: pid=317659: Tue Dec 10 04:21:37 2024 00:34:38.839 read: IOPS=2569, BW=20.1MiB/s (21.1MB/s)(100MiB/5004msec) 00:34:38.839 slat (nsec): min=6000, max=69360, avg=11296.20, stdev=7940.14 00:34:38.839 clat (usec): min=666, max=5622, avg=3079.37, stdev=489.92 00:34:38.839 lat (usec): min=673, max=5650, avg=3090.66, stdev=489.70 00:34:38.839 clat percentiles (usec): 00:34:38.839 | 1.00th=[ 2024], 5.00th=[ 
2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:34:38.839 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:34:38.839 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 4015], 00:34:38.839 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:38.839 | 99.99th=[ 5604] 00:34:38.839 bw ( KiB/s): min=20240, max=21072, per=24.43%, avg=20566.40, stdev=280.82, samples=10 00:34:38.839 iops : min= 2530, max= 2634, avg=2570.80, stdev=35.10, samples=10 00:34:38.839 lat (usec) : 750=0.03%, 1000=0.03% 00:34:38.839 lat (msec) : 2=0.74%, 4=94.03%, 10=5.17% 00:34:38.839 cpu : usr=96.02%, sys=3.66%, ctx=6, majf=0, minf=9 00:34:38.839 IO depths : 1=0.1%, 2=4.8%, 4=67.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 issued rwts: total=12859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.839 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:38.839 filename1: (groupid=0, jobs=1): err= 0: pid=317660: Tue Dec 10 04:21:37 2024 00:34:38.839 read: IOPS=2643, BW=20.7MiB/s (21.7MB/s)(103MiB/5001msec) 00:34:38.839 slat (nsec): min=6001, max=69434, avg=11274.85, stdev=7812.50 00:34:38.839 clat (usec): min=604, max=5515, avg=2991.82, stdev=446.71 00:34:38.839 lat (usec): min=626, max=5529, avg=3003.10, stdev=446.82 00:34:38.839 clat percentiles (usec): 00:34:38.839 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:34:38.839 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:34:38.839 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3752], 00:34:38.839 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5211], 00:34:38.839 | 99.99th=[ 5407] 00:34:38.839 bw ( KiB/s): min=20656, max=21584, per=25.12%, avg=21152.00, stdev=294.05, samples=9 00:34:38.839 iops : min= 2582, max= 2698, avg=2644.00, stdev=36.76, samples=9 00:34:38.839 lat (usec) : 750=0.02%, 1000=0.01% 00:34:38.839 lat (msec) : 2=1.31%, 4=95.86%, 10=2.80% 00:34:38.839 cpu : usr=96.32%, sys=3.34%, ctx=7, majf=0, minf=9 00:34:38.839 IO depths : 1=0.3%, 2=5.5%, 4=65.4%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.839 issued rwts: total=13220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.839 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:38.839 00:34:38.839 Run status group 0 (all jobs): 00:34:38.839 READ: bw=82.2MiB/s (86.2MB/s), 19.7MiB/s-21.8MiB/s (20.7MB/s-22.8MB/s), io=411MiB (431MB), run=5001-5004msec 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 00:34:38.839 real 0m24.192s 00:34:38.839 user 4m51.993s 00:34:38.839 sys 0m4.995s 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 ************************************ 00:34:38.839 END TEST fio_dif_rand_params 00:34:38.839 ************************************ 00:34:38.839 04:21:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:38.839 04:21:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:38.839 04:21:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 ************************************ 00:34:38.839 START TEST fio_dif_digest 00:34:38.839 ************************************ 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 
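Compared with the fio_dif_rand_params run above, the digest test changes two things: the null bdev moves to DIF type 3, and the initiator-side attach enables header and data digests, which add a CRC32C check to every NVMe/TCP PDU header and data payload respectively. On the target side only the bdev RPC differs; a sketch of that delta, following the same hypothetical rpc.py convention as the earlier snippet:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Same 64 MB / 512-byte / 16-byte-metadata null bdev, now with DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

The digest flags themselves travel in the attach JSON printed further down ("hdgst": true, "ddgst": true), which maps onto the TCP transport's header/data digest controller options.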
00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 bdev_null0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.839 [2024-12-10 04:21:37.835669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:38.839 { 00:34:38.839 "params": { 00:34:38.839 "name": "Nvme$subsystem", 00:34:38.839 "trtype": 
"$TEST_TRANSPORT", 00:34:38.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.839 "adrfam": "ipv4", 00:34:38.839 "trsvcid": "$NVMF_PORT", 00:34:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.839 "hdgst": ${hdgst:-false}, 00:34:38.839 "ddgst": ${ddgst:-false} 00:34:38.839 }, 00:34:38.839 "method": "bdev_nvme_attach_controller" 00:34:38.839 } 00:34:38.839 EOF 00:34:38.839 )") 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:38.839 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:38.840 "params": { 00:34:38.840 "name": "Nvme0", 00:34:38.840 "trtype": "tcp", 00:34:38.840 "traddr": "10.0.0.2", 00:34:38.840 "adrfam": "ipv4", 00:34:38.840 "trsvcid": "4420", 00:34:38.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:38.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:38.840 "hdgst": true, 00:34:38.840 "ddgst": true 00:34:38.840 }, 00:34:38.840 "method": "bdev_nvme_attach_controller" 00:34:38.840 }' 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:38.840 04:21:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.109 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:39.109 ... 
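The empty-looking LD_PRELOAD above comes from the wrapper's sanitizer probe: before launching fio it runs ldd on the plugin, greps for an ASAN runtime (libasan, then libclang_rt.asan), and preloads whichever resolves ahead of the plugin so sanitizer symbols win the lookup order. This build links neither, so asan_lib stays empty and only the plugin is preloaded. A condensed sketch of that logic (paths from the trace; /dev/fd/62 and /dev/fd/61 stand in for the harness's process substitutions):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Third ldd column is the resolved library path; empty when not linked in.
        asan_lib+=" $(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')"
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61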
00:34:39.109 fio-3.35 00:34:39.109 Starting 3 threads 00:34:51.323 00:34:51.323 filename0: (groupid=0, jobs=1): err= 0: pid=318715: Tue Dec 10 04:21:48 2024 00:34:51.323 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(357MiB/10045msec) 00:34:51.323 slat (nsec): min=6324, max=25759, avg=11461.23, stdev=1937.31 00:34:51.323 clat (usec): min=7975, max=48795, avg=10531.17, stdev=1246.94 00:34:51.323 lat (usec): min=7987, max=48807, avg=10542.63, stdev=1246.90 00:34:51.323 clat percentiles (usec): 00:34:51.323 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:51.323 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:34:51.323 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:34:51.323 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14091], 99.95th=[46924], 00:34:51.323 | 99.99th=[49021] 00:34:51.323 bw ( KiB/s): min=35328, max=37376, per=34.65%, avg=36505.60, stdev=650.82, samples=20 00:34:51.323 iops : min= 276, max= 292, avg=285.20, stdev= 5.08, samples=20 00:34:51.323 lat (msec) : 10=24.11%, 20=75.82%, 50=0.07% 00:34:51.323 cpu : usr=94.54%, sys=5.16%, ctx=19, majf=0, minf=25 00:34:51.323 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:51.323 filename0: (groupid=0, jobs=1): err= 0: pid=318716: Tue Dec 10 04:21:48 2024 00:34:51.323 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(346MiB/10003msec) 00:34:51.323 slat (nsec): min=6353, max=28298, avg=11271.88, stdev=2077.03 00:34:51.323 clat (usec): min=4934, max=13524, avg=10832.37, stdev=745.57 00:34:51.323 lat (usec): min=4941, max=13535, avg=10843.64, stdev=745.52 00:34:51.323 clat percentiles (usec): 00:34:51.323 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:34:51.323 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:34:51.323 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:34:51.323 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13173], 99.95th=[13304], 00:34:51.323 | 99.99th=[13566] 00:34:51.323 bw ( KiB/s): min=34816, max=36352, per=33.58%, avg=35381.89, stdev=387.11, samples=19 00:34:51.323 iops : min= 272, max= 284, avg=276.42, stdev= 3.02, samples=19 00:34:51.323 lat (msec) : 10=12.94%, 20=87.06% 00:34:51.323 cpu : usr=94.78%, sys=4.86%, ctx=22, majf=0, minf=1 00:34:51.323 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:51.323 filename0: (groupid=0, jobs=1): err= 0: pid=318717: Tue Dec 10 04:21:48 2024 00:34:51.323 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(331MiB/10043msec) 00:34:51.323 slat (nsec): min=6351, max=33854, avg=11107.51, stdev=2215.59 00:34:51.323 clat (usec): min=8446, max=48259, avg=11337.54, stdev=1043.01 00:34:51.323 lat (usec): min=8454, max=48272, avg=11348.65, stdev=1043.06 00:34:51.323 clat percentiles (usec): 00:34:51.323 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:34:51.323 | 
30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:34:51.323 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649], 00:34:51.323 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14353], 99.95th=[14615], 00:34:51.323 | 99.99th=[48497] 00:34:51.323 bw ( KiB/s): min=32768, max=34816, per=32.15%, avg=33868.80, stdev=485.02, samples=20 00:34:51.323 iops : min= 256, max= 272, avg=264.60, stdev= 3.79, samples=20 00:34:51.323 lat (msec) : 10=3.51%, 20=96.45%, 50=0.04% 00:34:51.323 cpu : usr=95.40%, sys=4.27%, ctx=19, majf=0, minf=21 00:34:51.323 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.323 issued rwts: total=2647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:51.323 00:34:51.323 Run status group 0 (all jobs): 00:34:51.323 READ: bw=103MiB/s (108MB/s), 32.9MiB/s-35.5MiB/s (34.5MB/s-37.2MB/s), io=1034MiB (1084MB), run=10003-10045msec 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.323 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.324 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.324 00:34:51.324 real 0m11.117s 00:34:51.324 user 0m35.242s 00:34:51.324 sys 0m1.738s 00:34:51.324 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.324 04:21:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.324 ************************************ 00:34:51.324 END TEST fio_dif_digest 00:34:51.324 ************************************ 00:34:51.324 04:21:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:51.324 04:21:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.324 rmmod nvme_tcp 00:34:51.324 rmmod nvme_fabrics 00:34:51.324 rmmod nvme_keyring 00:34:51.324 04:21:48 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 309999 ']' 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 309999 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 309999 ']' 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 309999 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 309999 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 309999' 00:34:51.324 killing process with pid 309999 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@973 -- # kill 309999 00:34:51.324 04:21:49 nvmf_dif -- common/autotest_common.sh@978 -- # wait 309999 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:51.324 04:21:49 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:52.703 Waiting for block devices as requested 00:34:52.703 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.961 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.961 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.961 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.220 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.220 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.220 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.220 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.479 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.480 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.480 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.738 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.738 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.738 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.738 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.998 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.998 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.998 04:21:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.998 04:21:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.998 04:21:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.536 04:21:55 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:56.536 00:34:56.536 real 1m13.839s 
00:34:56.536 user 7m9.851s 00:34:56.536 sys 0m20.190s 00:34:56.536 04:21:55 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.536 04:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:56.536 ************************************ 00:34:56.536 END TEST nvmf_dif 00:34:56.536 ************************************ 00:34:56.536 04:21:55 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:56.536 04:21:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:56.536 04:21:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.536 04:21:55 -- common/autotest_common.sh@10 -- # set +x 00:34:56.536 ************************************ 00:34:56.536 START TEST nvmf_abort_qd_sizes 00:34:56.536 ************************************ 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:56.536 * Looking for test storage... 00:34:56.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:56.536 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.537 --rc genhtml_branch_coverage=1 00:34:56.537 --rc genhtml_function_coverage=1 00:34:56.537 --rc genhtml_legend=1 00:34:56.537 --rc geninfo_all_blocks=1 00:34:56.537 --rc geninfo_unexecuted_blocks=1 00:34:56.537 00:34:56.537 ' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.537 --rc genhtml_branch_coverage=1 00:34:56.537 --rc genhtml_function_coverage=1 00:34:56.537 --rc genhtml_legend=1 00:34:56.537 --rc geninfo_all_blocks=1 00:34:56.537 --rc geninfo_unexecuted_blocks=1 00:34:56.537 00:34:56.537 ' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.537 --rc genhtml_branch_coverage=1 00:34:56.537 --rc genhtml_function_coverage=1 00:34:56.537 --rc genhtml_legend=1 00:34:56.537 --rc geninfo_all_blocks=1 00:34:56.537 --rc geninfo_unexecuted_blocks=1 00:34:56.537 00:34:56.537 ' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.537 --rc genhtml_branch_coverage=1 00:34:56.537 --rc genhtml_function_coverage=1 00:34:56.537 --rc genhtml_legend=1 00:34:56.537 --rc geninfo_all_blocks=1 00:34:56.537 --rc geninfo_unexecuted_blocks=1 00:34:56.537 00:34:56.537 ' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:56.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.537 04:21:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:03.112 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:03.113 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:03.113 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:03.113 Found net devices under 0000:af:00.0: cvl_0_0 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:03.113 Found net devices under 0000:af:00.1: cvl_0_1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:03.113 04:22:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:03.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:35:03.113 00:35:03.113 --- 10.0.0.2 ping statistics --- 00:35:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.113 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:03.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:35:03.113 00:35:03.113 --- 10.0.0.1 ping statistics --- 00:35:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.113 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:03.113 04:22:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:05.021 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:05.021 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:05.281 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:05.281 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:05.281 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:06.219 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=326557 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 326557 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 326557 ']' 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
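The "ioatdma -> vfio-pci" lines above are scripts/setup.sh rebinding the DMA engine functions (and, further on, the NVMe drive) to vfio-pci so SPDK can drive them from userspace. A sketch of the generic sysfs rebind sequence behind each such line, assuming the standard driver_override mechanism; setup.sh itself also handles hugepages and device-node permissions.

    dev=0000:00:04.0                                             # one of the functions listed above
    echo "$dev"   > "/sys/bus/pci/devices/$dev/driver/unbind"    # detach the current kernel driver
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"  # pin the replacement driver
    echo "$dev"   > /sys/bus/pci/drivers_probe                   # trigger the rebind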
00:35:06.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.219 04:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.219 [2024-12-10 04:22:05.384155] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:06.219 [2024-12-10 04:22:05.384218] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.219 [2024-12-10 04:22:05.463993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:06.478 [2024-12-10 04:22:05.506126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.478 [2024-12-10 04:22:05.506162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.478 [2024-12-10 04:22:05.506176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.478 [2024-12-10 04:22:05.506181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.478 [2024-12-10 04:22:05.506186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:06.478 [2024-12-10 04:22:05.507631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.478 [2024-12-10 04:22:05.507669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:06.478 [2024-12-10 04:22:05.507779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.478 [2024-12-10 04:22:05.507780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:07.047 
04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.047 04:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:07.047 ************************************ 00:35:07.047 START TEST spdk_target_abort 00:35:07.047 ************************************ 00:35:07.047 04:22:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:07.047 04:22:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:07.047 04:22:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:07.047 04:22:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.047 04:22:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.339 spdk_targetn1 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.339 [2024-12-10 04:22:09.126321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.339 [2024-12-10 04:22:09.174610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:10.339 04:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:13.630 Initializing NVMe Controllers 00:35:13.630 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:13.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:13.630 Initialization complete. Launching workers. 00:35:13.630 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15953, failed: 0 00:35:13.630 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1393, failed to submit 14560 00:35:13.630 success 735, unsuccessful 658, failed 0 00:35:13.630 04:22:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.630 04:22:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:16.922 Initializing NVMe Controllers 00:35:16.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.922 Initialization complete. Launching workers. 00:35:16.922 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8698, failed: 0 00:35:16.922 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7468 00:35:16.922 success 354, unsuccessful 876, failed 0 00:35:16.922 04:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.922 04:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.301 Initializing NVMe Controllers 00:35:20.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:20.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:20.301 Initialization complete. Launching workers. 
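The two runs above (queue depths 4 and 24) and the qd=64 run completing just below are the qds=(4 24 64) loop from abort_qd_sizes.sh. Stripped of the xtrace noise, each iteration is one invocation of SPDK's abort example (paths shortened; the transport ID string is the one built field by field in the trace above):

    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
    # -q: submission queue depth    -w rw -M 50: mixed workload, 50% reads
    # -o 4096: 4 KiB I/Os           -r: transport ID of the target subsystem
    # The tool submits aborts against in-flight I/O and reports the
    # success/unsuccessful/failed counts seen in the summaries here.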
00:35:20.301 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38468, failed: 0 00:35:20.301 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2805, failed to submit 35663 00:35:20.301 success 600, unsuccessful 2205, failed 0 00:35:20.301 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:20.301 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.301 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.302 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.302 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:20.302 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.302 04:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 326557 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 326557 ']' 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 326557 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326557 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326557' 00:35:21.237 killing process with pid 326557 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 326557 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 326557 00:35:21.237 00:35:21.237 real 0m14.127s 00:35:21.237 user 0m56.221s 00:35:21.237 sys 0m2.652s 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:21.237 ************************************ 00:35:21.237 END TEST spdk_target_abort 00:35:21.237 ************************************ 00:35:21.237 04:22:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:21.237 04:22:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:21.237 04:22:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.237 04:22:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:21.237 ************************************ 00:35:21.237 START TEST kernel_target_abort 00:35:21.237 
************************************ 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.237 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:21.238 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:21.238 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:21.238 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:21.496 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:21.496 04:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:24.030 Waiting for block devices as requested 00:35:24.030 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:24.290 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:24.290 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:24.290 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:24.290 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:24.549 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:24.549 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:24.549 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:24.808 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:24.808 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:24.808 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:25.068 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:25.068 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:25.068 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:25.068 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:25.326 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:25.326 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:25.326 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:25.585 No valid GPT data, bailing 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:25.585 04:22:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:25.585 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:25.586 00:35:25.586 Discovery Log Number of Records 2, Generation counter 2 00:35:25.586 =====Discovery Log Entry 0====== 00:35:25.586 trtype: tcp 00:35:25.586 adrfam: ipv4 00:35:25.586 subtype: current discovery subsystem 00:35:25.586 treq: not specified, sq flow control disable supported 00:35:25.586 portid: 1 00:35:25.586 trsvcid: 4420 00:35:25.586 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:25.586 traddr: 10.0.0.1 00:35:25.586 eflags: none 00:35:25.586 sectype: none 00:35:25.586 =====Discovery Log Entry 1====== 00:35:25.586 trtype: tcp 00:35:25.586 adrfam: ipv4 00:35:25.586 subtype: nvme subsystem 00:35:25.586 treq: not specified, sq flow control disable supported 00:35:25.586 portid: 1 00:35:25.586 trsvcid: 4420 00:35:25.586 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:25.586 traddr: 10.0.0.1 00:35:25.586 eflags: none 00:35:25.586 sectype: none 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.586 04:22:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.586 04:22:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:28.872 Initializing NVMe Controllers 00:35:28.872 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:28.872 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:28.872 Initialization complete. Launching workers. 00:35:28.872 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79503, failed: 0 00:35:28.872 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79503, failed to submit 0 00:35:28.872 success 0, unsuccessful 79503, failed 0 00:35:28.872 04:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:28.872 04:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.159 Initializing NVMe Controllers 00:35:32.159 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.159 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.159 Initialization complete. Launching workers. 
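These kernel_target_abort runs go against a Linux-kernel nvmet target rather than an SPDK one; it was assembled in the configure_kernel_target trace above. A reconstruction as a sketch: the mkdir/echo/ln commands are taken from the trace, but xtrace does not show redirections, so the configfs attribute paths are filled in from the standard nvmet layout and should be read as an assumption.

    modprobe nvmet                        # nvmet-tcp is pulled in when the tcp port is created
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
    # Teardown, traced further below as clean_kernel_target: rm the port link,
    # rmdir the namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet.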
00:35:32.159 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144332, failed: 0 00:35:32.159 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27870, failed to submit 116462 00:35:32.159 success 0, unsuccessful 27870, failed 0 00:35:32.159 04:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.159 04:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:35.445 Initializing NVMe Controllers 00:35:35.445 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:35.445 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:35.445 Initialization complete. Launching workers. 00:35:35.445 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131072, failed: 0 00:35:35.445 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32778, failed to submit 98294 00:35:35.445 success 0, unsuccessful 32778, failed 0 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:35.445 04:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:37.986 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.986 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:35:37.986 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:38.922 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:38.922 00:35:38.922 real 0m17.518s 00:35:38.922 user 0m8.644s 00:35:38.922 sys 0m5.214s 00:35:38.922 04:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.922 04:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.922 ************************************ 00:35:38.922 END TEST kernel_target_abort 00:35:38.922 ************************************ 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.922 rmmod nvme_tcp 00:35:38.922 rmmod nvme_fabrics 00:35:38.922 rmmod nvme_keyring 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 326557 ']' 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 326557 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 326557 ']' 00:35:38.922 04:22:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 326557 00:35:38.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (326557) - No such process 00:35:38.923 04:22:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 326557 is not found' 00:35:38.923 Process with pid 326557 is not found 00:35:38.923 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:38.923 04:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:42.213 Waiting for block devices as requested 00:35:42.213 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:42.213 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:42.213 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:42.472 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:42.472 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:42.472 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:42.730 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:42.730 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:42.730 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:42.990 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:42.990 0000:80:04.0 
(8086 2021): vfio-pci -> ioatdma 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:42.990 04:22:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.526 04:22:44 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.526 00:35:45.526 real 0m48.864s 00:35:45.526 user 1m9.358s 00:35:45.526 sys 0m16.544s 00:35:45.526 04:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.526 04:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.526 ************************************ 00:35:45.526 END TEST nvmf_abort_qd_sizes 00:35:45.526 ************************************ 00:35:45.526 04:22:44 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:45.526 04:22:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.526 04:22:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.526 04:22:44 -- common/autotest_common.sh@10 -- # set +x 00:35:45.526 ************************************ 00:35:45.526 START TEST keyring_file 00:35:45.526 ************************************ 00:35:45.526 04:22:44 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:45.526 * Looking for test storage... 
00:35:45.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:45.526 04:22:44 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:45.526 04:22:44 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:45.526 04:22:44 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:45.526 04:22:44 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.526 04:22:44 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:45.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.527 --rc genhtml_branch_coverage=1 00:35:45.527 --rc genhtml_function_coverage=1 00:35:45.527 --rc genhtml_legend=1 00:35:45.527 --rc geninfo_all_blocks=1 00:35:45.527 --rc geninfo_unexecuted_blocks=1 00:35:45.527 00:35:45.527 ' 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:45.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.527 --rc genhtml_branch_coverage=1 00:35:45.527 --rc genhtml_function_coverage=1 00:35:45.527 --rc genhtml_legend=1 00:35:45.527 --rc geninfo_all_blocks=1 
00:35:45.527 --rc geninfo_unexecuted_blocks=1 00:35:45.527 00:35:45.527 ' 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:45.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.527 --rc genhtml_branch_coverage=1 00:35:45.527 --rc genhtml_function_coverage=1 00:35:45.527 --rc genhtml_legend=1 00:35:45.527 --rc geninfo_all_blocks=1 00:35:45.527 --rc geninfo_unexecuted_blocks=1 00:35:45.527 00:35:45.527 ' 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:45.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.527 --rc genhtml_branch_coverage=1 00:35:45.527 --rc genhtml_function_coverage=1 00:35:45.527 --rc genhtml_legend=1 00:35:45.527 --rc geninfo_all_blocks=1 00:35:45.527 --rc geninfo_unexecuted_blocks=1 00:35:45.527 00:35:45.527 ' 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.527 04:22:44 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.527 04:22:44 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.527 04:22:44 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.527 04:22:44 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.527 04:22:44 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.527 04:22:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.527 04:22:44 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.527 04:22:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:45.527 04:22:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:45.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AAH37HpwkE 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AAH37HpwkE 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AAH37HpwkE 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AAH37HpwkE 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8cZqH8DUgR 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:45.527 04:22:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8cZqH8DUgR 00:35:45.527 04:22:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8cZqH8DUgR 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.8cZqH8DUgR 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=335373 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 335373 00:35:45.527 04:22:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 335373 ']' 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.527 04:22:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.527 [2024-12-10 04:22:44.696761] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:45.527 [2024-12-10 04:22:44.696811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335373 ] 00:35:45.527 [2024-12-10 04:22:44.771528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.786 [2024-12-10 04:22:44.813059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.786 04:22:45 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.786 04:22:45 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:45.786 04:22:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:45.786 04:22:45 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.786 04:22:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.787 [2024-12-10 04:22:45.022698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.787 null0 00:35:45.787 [2024-12-10 04:22:45.054743] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:45.787 [2024-12-10 04:22:45.055005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.045 04:22:45 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.045 [2024-12-10 04:22:45.082806] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:46.045 request: 00:35:46.045 { 00:35:46.045 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.045 "secure_channel": false, 00:35:46.045 "listen_address": { 00:35:46.045 "trtype": "tcp", 00:35:46.045 "traddr": "127.0.0.1", 00:35:46.045 "trsvcid": "4420" 00:35:46.045 }, 00:35:46.045 "method": "nvmf_subsystem_add_listener", 00:35:46.045 "req_id": 1 00:35:46.045 } 00:35:46.045 Got JSON-RPC error response 00:35:46.045 response: 00:35:46.045 { 00:35:46.045 "code": 
-32602, 00:35:46.045 "message": "Invalid parameters" 00:35:46.045 } 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:46.045 04:22:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:46.045 04:22:45 keyring_file -- keyring/file.sh@47 -- # bperfpid=335383 00:35:46.045 04:22:45 keyring_file -- keyring/file.sh@49 -- # waitforlisten 335383 /var/tmp/bperf.sock 00:35:46.045 04:22:45 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 335383 ']' 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.046 [2024-12-10 04:22:45.134088] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:46.046 [2024-12-10 04:22:45.134128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335383 ] 00:35:46.046 [2024-12-10 04:22:45.188382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.046 [2024-12-10 04:22:45.227296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.046 04:22:45 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:46.046 04:22:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:46.304 04:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:46.304 04:22:45 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8cZqH8DUgR 00:35:46.304 04:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8cZqH8DUgR 00:35:46.563 04:22:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:46.563 04:22:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:46.563 04:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.563 04:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.563 04:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.822 
04:22:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AAH37HpwkE == \/\t\m\p\/\t\m\p\.\A\A\H\3\7\H\p\w\k\E ]] 00:35:46.822 04:22:45 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:46.822 04:22:45 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:46.822 04:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.822 04:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.822 04:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.100 04:22:46 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.8cZqH8DUgR == \/\t\m\p\/\t\m\p\.\8\c\Z\q\H\8\D\U\g\R ]] 00:35:47.100 04:22:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.100 04:22:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:47.100 04:22:46 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.100 04:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.359 04:22:46 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:47.359 04:22:46 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.359 04:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.618 [2024-12-10 04:22:46.681454] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:47.618 nvme0n1 00:35:47.618 04:22:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:47.618 04:22:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.618 04:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.618 04:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.618 04:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.618 04:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.876 04:22:46 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:47.876 04:22:46 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:47.876 04:22:46 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:35:47.876 04:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.876 04:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.876 04:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.876 04:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.876 04:22:47 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:47.876 04:22:47 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.134 Running I/O for 1 seconds... 00:35:49.071 19487.00 IOPS, 76.12 MiB/s 00:35:49.071 Latency(us) 00:35:49.071 [2024-12-10T03:22:48.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.071 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:49.071 nvme0n1 : 1.00 19532.83 76.30 0.00 0.00 6541.00 4181.82 14480.34 00:35:49.071 [2024-12-10T03:22:48.357Z] =================================================================================================================== 00:35:49.071 [2024-12-10T03:22:48.357Z] Total : 19532.83 76.30 0.00 0.00 6541.00 4181.82 14480.34 00:35:49.071 { 00:35:49.071 "results": [ 00:35:49.071 { 00:35:49.071 "job": "nvme0n1", 00:35:49.071 "core_mask": "0x2", 00:35:49.071 "workload": "randrw", 00:35:49.071 "percentage": 50, 00:35:49.071 "status": "finished", 00:35:49.071 "queue_depth": 128, 00:35:49.071 "io_size": 4096, 00:35:49.071 "runtime": 1.004207, 00:35:49.071 "iops": 19532.82540352736, 00:35:49.071 "mibps": 76.30009923252875, 00:35:49.071 "io_failed": 0, 00:35:49.071 "io_timeout": 0, 00:35:49.071 "avg_latency_us": 6540.999217799789, 00:35:49.071 "min_latency_us": 4181.820952380953, 00:35:49.071 "max_latency_us": 14480.335238095238 00:35:49.071 } 00:35:49.071 ], 00:35:49.071 "core_count": 1 00:35:49.071 } 00:35:49.071 04:22:48 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.071 04:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.330 04:22:48 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:49.330 04:22:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.330 04:22:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.330 04:22:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.330 04:22:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.330 04:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.588 04:22:48 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:49.588 04:22:48 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:49.588 04:22:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:49.588 04:22:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.588 04:22:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.588 04:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.588 04:22:48 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:49.847 04:22:48 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:49.847 04:22:48 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.847 04:22:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.847 04:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.847 [2024-12-10 04:22:49.056399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:49.847 [2024-12-10 04:22:49.056955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaab470 (107): Transport endpoint is not connected 00:35:49.847 [2024-12-10 04:22:49.057950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaab470 (9): Bad file descriptor 00:35:49.847 [2024-12-10 04:22:49.058951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:49.847 [2024-12-10 04:22:49.058961] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:49.847 [2024-12-10 04:22:49.058968] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:49.847 [2024-12-10 04:22:49.058977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
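The failed attach above is deliberate: the bperf side registered key1, but the target only accepts the PSK behind key0, so the `NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1` wrapper expects the command to fail and turns that failure into a pass. A minimal standalone sketch of that negative-assertion pattern is below; it is a hypothetical simplification of the real `NOT` helper in autotest_common.sh, not a verbatim copy:

```bash
#!/usr/bin/env bash
# Hedged sketch of a NOT-style helper: succeed only when the wrapped
# command fails, so expected-error paths survive a `set -e` test script.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> assertion fails
    fi
    return 0        # command failed as expected -> assertion passes
}

# usage: attaching with the wrong PSK is supposed to fail,
# and that failure is exactly what makes the test pass
NOT false && echo "negative assertion passed"
```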
00:35:49.847 request: 00:35:49.847 { 00:35:49.847 "name": "nvme0", 00:35:49.847 "trtype": "tcp", 00:35:49.847 "traddr": "127.0.0.1", 00:35:49.847 "adrfam": "ipv4", 00:35:49.847 "trsvcid": "4420", 00:35:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.847 "prchk_reftag": false, 00:35:49.847 "prchk_guard": false, 00:35:49.847 "hdgst": false, 00:35:49.847 "ddgst": false, 00:35:49.847 "psk": "key1", 00:35:49.847 "allow_unrecognized_csi": false, 00:35:49.847 "method": "bdev_nvme_attach_controller", 00:35:49.847 "req_id": 1 00:35:49.847 } 00:35:49.847 Got JSON-RPC error response 00:35:49.847 response: 00:35:49.847 { 00:35:49.847 "code": -5, 00:35:49.847 "message": "Input/output error" 00:35:49.847 } 00:35:49.847 04:22:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:49.847 04:22:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:49.847 04:22:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:49.847 04:22:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:49.847 04:22:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:49.847 04:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.847 04:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.847 04:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.847 04:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.847 04:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.106 04:22:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:50.106 04:22:49 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:50.106 04:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.106 04:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:50.106 04:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.106 04:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.106 04:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:50.364 04:22:49 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:50.364 04:22:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.364 04:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:50.623 04:22:49 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:50.623 04:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:50.623 04:22:49 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:50.623 04:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.623 04:22:49 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:50.881 04:22:50 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:50.881 04:22:50 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AAH37HpwkE 00:35:50.881 04:22:50 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.881 04:22:50 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:50.881 04:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:51.140 [2024-12-10 04:22:50.223075] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AAH37HpwkE': 0100660 00:35:51.140 [2024-12-10 04:22:50.223108] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:51.140 request: 00:35:51.140 { 00:35:51.140 "name": "key0", 00:35:51.140 "path": "/tmp/tmp.AAH37HpwkE", 00:35:51.140 "method": "keyring_file_add_key", 00:35:51.140 "req_id": 1 00:35:51.140 } 00:35:51.140 Got JSON-RPC error response 00:35:51.140 response: 00:35:51.140 { 00:35:51.140 "code": -1, 00:35:51.140 "message": "Operation not permitted" 00:35:51.140 } 00:35:51.140 04:22:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:51.141 04:22:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:51.141 04:22:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:51.141 04:22:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:51.141 04:22:50 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AAH37HpwkE 00:35:51.141 04:22:50 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:51.141 04:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AAH37HpwkE 00:35:51.399 04:22:50 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AAH37HpwkE 00:35:51.399 04:22:50 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:51.399 04:22:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.399 04:22:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.400 04:22:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.400 04:22:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.400 04:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.400 04:22:50 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:51.400 04:22:50 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:51.400 04:22:50 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.400 04:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.658 [2024-12-10 04:22:50.828667] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AAH37HpwkE': No such file or directory 00:35:51.659 [2024-12-10 04:22:50.828688] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:51.659 [2024-12-10 04:22:50.828704] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:51.659 [2024-12-10 04:22:50.828710] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:51.659 [2024-12-10 04:22:50.828717] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:51.659 [2024-12-10 04:22:50.828723] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:51.659 request: 00:35:51.659 { 00:35:51.659 "name": "nvme0", 00:35:51.659 "trtype": "tcp", 00:35:51.659 "traddr": "127.0.0.1", 00:35:51.659 "adrfam": "ipv4", 00:35:51.659 "trsvcid": "4420", 00:35:51.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.659 "prchk_reftag": false, 00:35:51.659 "prchk_guard": false, 00:35:51.659 "hdgst": false, 00:35:51.659 "ddgst": false, 00:35:51.659 "psk": "key0", 00:35:51.659 "allow_unrecognized_csi": false, 00:35:51.659 "method": "bdev_nvme_attach_controller", 00:35:51.659 "req_id": 1 00:35:51.659 } 00:35:51.659 Got JSON-RPC error response 00:35:51.659 response: 00:35:51.659 { 00:35:51.659 "code": -19, 00:35:51.659 "message": "No such device" 00:35:51.659 } 00:35:51.659 04:22:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:51.659 04:22:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:51.659 04:22:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:51.659 04:22:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:51.659 04:22:50 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:51.659 04:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:51.917 04:22:51 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v00dhDncsE 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:51.917 04:22:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v00dhDncsE 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v00dhDncsE 00:35:51.917 04:22:51 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.v00dhDncsE 00:35:51.917 04:22:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v00dhDncsE 00:35:51.917 04:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v00dhDncsE 00:35:52.176 04:22:51 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.176 04:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.434 nvme0n1 00:35:52.434 04:22:51 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:52.434 04:22:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.434 04:22:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.434 04:22:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.434 04:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.434 04:22:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.692 04:22:51 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:52.692 04:22:51 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:52.692 04:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:52.692 04:22:51 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:52.692 04:22:51 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:52.692 04:22:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.692 04:22:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.692 04:22:51 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.950 04:22:52 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:52.950 04:22:52 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:52.950 04:22:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.950 04:22:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.950 04:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.950 04:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.950 04:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.209 04:22:52 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:53.209 04:22:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:53.209 04:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:53.467 04:22:52 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:53.467 04:22:52 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:53.467 04:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.467 04:22:52 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:53.467 04:22:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v00dhDncsE 00:35:53.467 04:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v00dhDncsE 00:35:53.726 04:22:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.8cZqH8DUgR 00:35:53.726 04:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.8cZqH8DUgR 00:35:53.985 04:22:53 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:53.985 04:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.244 nvme0n1 00:35:54.244 04:22:53 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:54.244 04:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:54.502 04:22:53 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:54.502 "subsystems": [ 00:35:54.502 { 00:35:54.502 "subsystem": "keyring", 00:35:54.502 "config": [ 00:35:54.502 { 00:35:54.502 "method": "keyring_file_add_key", 00:35:54.502 "params": { 00:35:54.502 "name": "key0", 00:35:54.502 "path": "/tmp/tmp.v00dhDncsE" 00:35:54.502 } 00:35:54.502 }, 00:35:54.502 { 00:35:54.502 "method": "keyring_file_add_key", 00:35:54.502 "params": { 00:35:54.502 "name": "key1", 00:35:54.502 "path": "/tmp/tmp.8cZqH8DUgR" 00:35:54.502 } 00:35:54.502 } 00:35:54.502 ] 00:35:54.502 
}, 00:35:54.502 { 00:35:54.502 "subsystem": "iobuf", 00:35:54.502 "config": [ 00:35:54.502 { 00:35:54.502 "method": "iobuf_set_options", 00:35:54.502 "params": { 00:35:54.502 "small_pool_count": 8192, 00:35:54.502 "large_pool_count": 1024, 00:35:54.502 "small_bufsize": 8192, 00:35:54.502 "large_bufsize": 135168, 00:35:54.502 "enable_numa": false 00:35:54.502 } 00:35:54.502 } 00:35:54.502 ] 00:35:54.502 }, 00:35:54.502 { 00:35:54.502 "subsystem": "sock", 00:35:54.502 "config": [ 00:35:54.502 { 00:35:54.502 "method": "sock_set_default_impl", 00:35:54.502 "params": { 00:35:54.502 "impl_name": "posix" 00:35:54.502 } 00:35:54.502 }, 00:35:54.502 { 00:35:54.502 "method": "sock_impl_set_options", 00:35:54.502 "params": { 00:35:54.502 "impl_name": "ssl", 00:35:54.502 "recv_buf_size": 4096, 00:35:54.502 "send_buf_size": 4096, 00:35:54.502 "enable_recv_pipe": true, 00:35:54.502 "enable_quickack": false, 00:35:54.502 "enable_placement_id": 0, 00:35:54.502 "enable_zerocopy_send_server": true, 00:35:54.502 "enable_zerocopy_send_client": false, 00:35:54.502 "zerocopy_threshold": 0, 00:35:54.502 "tls_version": 0, 00:35:54.502 "enable_ktls": false 00:35:54.502 } 00:35:54.502 }, 00:35:54.502 { 00:35:54.502 "method": "sock_impl_set_options", 00:35:54.502 "params": { 00:35:54.502 "impl_name": "posix", 00:35:54.502 "recv_buf_size": 2097152, 00:35:54.502 "send_buf_size": 2097152, 00:35:54.503 "enable_recv_pipe": true, 00:35:54.503 "enable_quickack": false, 00:35:54.503 "enable_placement_id": 0, 00:35:54.503 "enable_zerocopy_send_server": true, 00:35:54.503 "enable_zerocopy_send_client": false, 00:35:54.503 "zerocopy_threshold": 0, 00:35:54.503 "tls_version": 0, 00:35:54.503 "enable_ktls": false 00:35:54.503 } 00:35:54.503 } 00:35:54.503 ] 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "subsystem": "vmd", 00:35:54.503 "config": [] 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "subsystem": "accel", 00:35:54.503 "config": [ 00:35:54.503 { 00:35:54.503 "method": "accel_set_options", 00:35:54.503 "params": { 00:35:54.503 "small_cache_size": 128, 00:35:54.503 "large_cache_size": 16, 00:35:54.503 "task_count": 2048, 00:35:54.503 "sequence_count": 2048, 00:35:54.503 "buf_count": 2048 00:35:54.503 } 00:35:54.503 } 00:35:54.503 ] 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "subsystem": "bdev", 00:35:54.503 "config": [ 00:35:54.503 { 00:35:54.503 "method": "bdev_set_options", 00:35:54.503 "params": { 00:35:54.503 "bdev_io_pool_size": 65535, 00:35:54.503 "bdev_io_cache_size": 256, 00:35:54.503 "bdev_auto_examine": true, 00:35:54.503 "iobuf_small_cache_size": 128, 00:35:54.503 "iobuf_large_cache_size": 16 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_raid_set_options", 00:35:54.503 "params": { 00:35:54.503 "process_window_size_kb": 1024, 00:35:54.503 "process_max_bandwidth_mb_sec": 0 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_iscsi_set_options", 00:35:54.503 "params": { 00:35:54.503 "timeout_sec": 30 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_nvme_set_options", 00:35:54.503 "params": { 00:35:54.503 "action_on_timeout": "none", 00:35:54.503 "timeout_us": 0, 00:35:54.503 "timeout_admin_us": 0, 00:35:54.503 "keep_alive_timeout_ms": 10000, 00:35:54.503 "arbitration_burst": 0, 00:35:54.503 "low_priority_weight": 0, 00:35:54.503 "medium_priority_weight": 0, 00:35:54.503 "high_priority_weight": 0, 00:35:54.503 "nvme_adminq_poll_period_us": 10000, 00:35:54.503 "nvme_ioq_poll_period_us": 0, 00:35:54.503 "io_queue_requests": 512, 00:35:54.503 
"delay_cmd_submit": true, 00:35:54.503 "transport_retry_count": 4, 00:35:54.503 "bdev_retry_count": 3, 00:35:54.503 "transport_ack_timeout": 0, 00:35:54.503 "ctrlr_loss_timeout_sec": 0, 00:35:54.503 "reconnect_delay_sec": 0, 00:35:54.503 "fast_io_fail_timeout_sec": 0, 00:35:54.503 "disable_auto_failback": false, 00:35:54.503 "generate_uuids": false, 00:35:54.503 "transport_tos": 0, 00:35:54.503 "nvme_error_stat": false, 00:35:54.503 "rdma_srq_size": 0, 00:35:54.503 "io_path_stat": false, 00:35:54.503 "allow_accel_sequence": false, 00:35:54.503 "rdma_max_cq_size": 0, 00:35:54.503 "rdma_cm_event_timeout_ms": 0, 00:35:54.503 "dhchap_digests": [ 00:35:54.503 "sha256", 00:35:54.503 "sha384", 00:35:54.503 "sha512" 00:35:54.503 ], 00:35:54.503 "dhchap_dhgroups": [ 00:35:54.503 "null", 00:35:54.503 "ffdhe2048", 00:35:54.503 "ffdhe3072", 00:35:54.503 "ffdhe4096", 00:35:54.503 "ffdhe6144", 00:35:54.503 "ffdhe8192" 00:35:54.503 ] 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_nvme_attach_controller", 00:35:54.503 "params": { 00:35:54.503 "name": "nvme0", 00:35:54.503 "trtype": "TCP", 00:35:54.503 "adrfam": "IPv4", 00:35:54.503 "traddr": "127.0.0.1", 00:35:54.503 "trsvcid": "4420", 00:35:54.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.503 "prchk_reftag": false, 00:35:54.503 "prchk_guard": false, 00:35:54.503 "ctrlr_loss_timeout_sec": 0, 00:35:54.503 "reconnect_delay_sec": 0, 00:35:54.503 "fast_io_fail_timeout_sec": 0, 00:35:54.503 "psk": "key0", 00:35:54.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.503 "hdgst": false, 00:35:54.503 "ddgst": false, 00:35:54.503 "multipath": "multipath" 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_nvme_set_hotplug", 00:35:54.503 "params": { 00:35:54.503 "period_us": 100000, 00:35:54.503 "enable": false 00:35:54.503 } 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "method": "bdev_wait_for_examine" 00:35:54.503 } 00:35:54.503 ] 00:35:54.503 }, 00:35:54.503 { 00:35:54.503 "subsystem": "nbd", 00:35:54.503 "config": [] 00:35:54.503 } 00:35:54.503 ] 00:35:54.503 }' 00:35:54.503 04:22:53 keyring_file -- keyring/file.sh@115 -- # killprocess 335383 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 335383 ']' 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 335383 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335383 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335383' 00:35:54.503 killing process with pid 335383 00:35:54.503 04:22:53 keyring_file -- common/autotest_common.sh@973 -- # kill 335383 00:35:54.503 Received shutdown signal, test time was about 1.000000 seconds 00:35:54.503 00:35:54.503 Latency(us) 00:35:54.503 [2024-12-10T03:22:53.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.503 [2024-12-10T03:22:53.789Z] =================================================================================================================== 00:35:54.503 [2024-12-10T03:22:53.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.503 04:22:53 
keyring_file -- common/autotest_common.sh@978 -- # wait 335383 00:35:54.762 04:22:53 keyring_file -- keyring/file.sh@118 -- # bperfpid=336860 00:35:54.762 04:22:53 keyring_file -- keyring/file.sh@120 -- # waitforlisten 336860 /var/tmp/bperf.sock 00:35:54.762 04:22:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 336860 ']' 00:35:54.762 04:22:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:54.762 04:22:53 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:54.762 04:22:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.762 04:22:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.762 04:22:53 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:54.762 "subsystems": [ 00:35:54.762 { 00:35:54.762 "subsystem": "keyring", 00:35:54.762 "config": [ 00:35:54.762 { 00:35:54.762 "method": "keyring_file_add_key", 00:35:54.762 "params": { 00:35:54.762 "name": "key0", 00:35:54.762 "path": "/tmp/tmp.v00dhDncsE" 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "keyring_file_add_key", 00:35:54.762 "params": { 00:35:54.762 "name": "key1", 00:35:54.762 "path": "/tmp/tmp.8cZqH8DUgR" 00:35:54.762 } 00:35:54.762 } 00:35:54.762 ] 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "subsystem": "iobuf", 00:35:54.762 "config": [ 00:35:54.762 { 00:35:54.762 "method": "iobuf_set_options", 00:35:54.762 "params": { 00:35:54.762 "small_pool_count": 8192, 00:35:54.762 "large_pool_count": 1024, 00:35:54.762 "small_bufsize": 8192, 00:35:54.762 "large_bufsize": 135168, 00:35:54.762 "enable_numa": false 00:35:54.762 } 00:35:54.762 } 00:35:54.762 ] 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "subsystem": "sock", 00:35:54.762 "config": [ 00:35:54.762 { 00:35:54.762 "method": "sock_set_default_impl", 00:35:54.762 "params": { 00:35:54.762 "impl_name": "posix" 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "sock_impl_set_options", 00:35:54.762 "params": { 00:35:54.762 "impl_name": "ssl", 00:35:54.762 "recv_buf_size": 4096, 00:35:54.762 "send_buf_size": 4096, 00:35:54.762 "enable_recv_pipe": true, 00:35:54.762 "enable_quickack": false, 00:35:54.762 "enable_placement_id": 0, 00:35:54.762 "enable_zerocopy_send_server": true, 00:35:54.762 "enable_zerocopy_send_client": false, 00:35:54.762 "zerocopy_threshold": 0, 00:35:54.762 "tls_version": 0, 00:35:54.762 "enable_ktls": false 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "sock_impl_set_options", 00:35:54.762 "params": { 00:35:54.762 "impl_name": "posix", 00:35:54.762 "recv_buf_size": 2097152, 00:35:54.762 "send_buf_size": 2097152, 00:35:54.762 "enable_recv_pipe": true, 00:35:54.762 "enable_quickack": false, 00:35:54.762 "enable_placement_id": 0, 00:35:54.762 "enable_zerocopy_send_server": true, 00:35:54.762 "enable_zerocopy_send_client": false, 00:35:54.762 "zerocopy_threshold": 0, 00:35:54.762 "tls_version": 0, 00:35:54.762 "enable_ktls": false 00:35:54.762 } 00:35:54.762 } 00:35:54.762 ] 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "subsystem": "vmd", 00:35:54.762 "config": [] 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "subsystem": "accel", 00:35:54.762 "config": [ 00:35:54.762 { 
00:35:54.762 "method": "accel_set_options", 00:35:54.762 "params": { 00:35:54.762 "small_cache_size": 128, 00:35:54.762 "large_cache_size": 16, 00:35:54.762 "task_count": 2048, 00:35:54.762 "sequence_count": 2048, 00:35:54.762 "buf_count": 2048 00:35:54.762 } 00:35:54.762 } 00:35:54.762 ] 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "subsystem": "bdev", 00:35:54.762 "config": [ 00:35:54.762 { 00:35:54.762 "method": "bdev_set_options", 00:35:54.762 "params": { 00:35:54.762 "bdev_io_pool_size": 65535, 00:35:54.762 "bdev_io_cache_size": 256, 00:35:54.762 "bdev_auto_examine": true, 00:35:54.762 "iobuf_small_cache_size": 128, 00:35:54.762 "iobuf_large_cache_size": 16 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "bdev_raid_set_options", 00:35:54.762 "params": { 00:35:54.762 "process_window_size_kb": 1024, 00:35:54.762 "process_max_bandwidth_mb_sec": 0 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "bdev_iscsi_set_options", 00:35:54.762 "params": { 00:35:54.762 "timeout_sec": 30 00:35:54.762 } 00:35:54.762 }, 00:35:54.762 { 00:35:54.762 "method": "bdev_nvme_set_options", 00:35:54.762 "params": { 00:35:54.762 "action_on_timeout": "none", 00:35:54.762 "timeout_us": 0, 00:35:54.762 "timeout_admin_us": 0, 00:35:54.762 "keep_alive_timeout_ms": 10000, 00:35:54.762 "arbitration_burst": 0, 00:35:54.762 "low_priority_weight": 0, 00:35:54.762 "medium_priority_weight": 0, 00:35:54.763 "high_priority_weight": 0, 00:35:54.763 "nvme_adminq_poll_period_us": 10000, 00:35:54.763 "nvme_ioq_poll_period_us": 0, 00:35:54.763 "io_queue_requests": 512, 00:35:54.763 "delay_cmd_submit": true, 00:35:54.763 "transport_retry_count": 4, 00:35:54.763 "bdev_retry_count": 3, 00:35:54.763 "transport_ack_timeout": 0, 00:35:54.763 "ctrlr_loss_timeout_sec": 0, 00:35:54.763 "reconnect_delay_sec": 0, 00:35:54.763 "fast_io_fail_timeout_sec": 0, 00:35:54.763 "disable_auto_failback": false, 00:35:54.763 "generate_uuids": false, 00:35:54.763 "transport_tos": 0, 00:35:54.763 "nvme_error_stat": false, 00:35:54.763 "rdma_srq_size": 0, 00:35:54.763 "io_path_stat": false, 00:35:54.763 "allow_accel_sequence": false, 00:35:54.763 "rdma_max_cq_size": 0, 00:35:54.763 "rdma_cm_event_timeout_ms": 0, 00:35:54.763 "dhchap_digests": [ 00:35:54.763 "sha256", 00:35:54.763 "sha384", 00:35:54.763 "sha512" 00:35:54.763 ], 00:35:54.763 "dhchap_dhgroups": [ 00:35:54.763 "null", 00:35:54.763 "ffdhe2048", 00:35:54.763 "ffdhe3072", 00:35:54.763 "ffdhe4096", 00:35:54.763 "ffdhe6144", 00:35:54.763 "ffdhe8192" 00:35:54.763 ] 00:35:54.763 } 00:35:54.763 }, 00:35:54.763 { 00:35:54.763 "method": "bdev_nvme_attach_controller", 00:35:54.763 "params": { 00:35:54.763 "name": "nvme0", 00:35:54.763 "trtype": "TCP", 00:35:54.763 "adrfam": "IPv4", 00:35:54.763 "traddr": "127.0.0.1", 00:35:54.763 "trsvcid": "4420", 00:35:54.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.763 "prchk_reftag": false, 00:35:54.763 "prchk_guard": false, 00:35:54.763 "ctrlr_loss_timeout_sec": 0, 00:35:54.763 "reconnect_delay_sec": 0, 00:35:54.763 "fast_io_fail_timeout_sec": 0, 00:35:54.763 "psk": "key0", 00:35:54.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.763 "hdgst": false, 00:35:54.763 "ddgst": false, 00:35:54.763 "multipath": "multipath" 00:35:54.763 } 00:35:54.763 }, 00:35:54.763 { 00:35:54.763 "method": "bdev_nvme_set_hotplug", 00:35:54.763 "params": { 00:35:54.763 "period_us": 100000, 00:35:54.763 "enable": false 00:35:54.763 } 00:35:54.763 }, 00:35:54.763 { 00:35:54.763 "method": "bdev_wait_for_examine" 00:35:54.763 } 00:35:54.763 ] 
00:35:54.763 }, 00:35:54.763 { 00:35:54.763 "subsystem": "nbd", 00:35:54.763 "config": [] 00:35:54.763 } 00:35:54.763 ] 00:35:54.763 }' 00:35:54.763 04:22:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.763 04:22:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.763 [2024-12-10 04:22:53.880797] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:35:54.763 [2024-12-10 04:22:53.880848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336860 ] 00:35:54.763 [2024-12-10 04:22:53.955570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.763 [2024-12-10 04:22:53.995637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.022 [2024-12-10 04:22:54.155898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:55.588 04:22:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.589 04:22:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:55.589 04:22:54 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:55.589 04:22:54 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:55.589 04:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.848 04:22:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:55.848 04:22:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:55.848 04:22:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.848 04:22:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.848 04:22:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.848 04:22:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.848 04:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.848 04:22:55 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:55.848 04:22:55 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:55.848 04:22:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.848 04:22:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.848 04:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.848 04:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.848 04:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.106 04:22:55 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:56.106 04:22:55 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:56.106 04:22:55 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:56.106 04:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:56.364 04:22:55 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:56.364 04:22:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:56.364 04:22:55 keyring_file -- keyring/file.sh@19 -- # 
rm -f /tmp/tmp.v00dhDncsE /tmp/tmp.8cZqH8DUgR 00:35:56.364 04:22:55 keyring_file -- keyring/file.sh@20 -- # killprocess 336860 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 336860 ']' 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@958 -- # kill -0 336860 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336860 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336860' 00:35:56.364 killing process with pid 336860 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@973 -- # kill 336860 00:35:56.364 Received shutdown signal, test time was about 1.000000 seconds 00:35:56.364 00:35:56.364 Latency(us) 00:35:56.364 [2024-12-10T03:22:55.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.364 [2024-12-10T03:22:55.650Z] =================================================================================================================== 00:35:56.364 [2024-12-10T03:22:55.650Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:56.364 04:22:55 keyring_file -- common/autotest_common.sh@978 -- # wait 336860 00:35:56.623 04:22:55 keyring_file -- keyring/file.sh@21 -- # killprocess 335373 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 335373 ']' 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@958 -- # kill -0 335373 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335373 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335373' 00:35:56.623 killing process with pid 335373 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@973 -- # kill 335373 00:35:56.623 04:22:55 keyring_file -- common/autotest_common.sh@978 -- # wait 335373 00:35:56.882 00:35:56.882 real 0m11.739s 00:35:56.882 user 0m29.141s 00:35:56.882 sys 0m2.728s 00:35:56.882 04:22:56 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.882 04:22:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.882 ************************************ 00:35:56.882 END TEST keyring_file 00:35:56.882 ************************************ 00:35:56.882 04:22:56 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:56.882 04:22:56 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.882 04:22:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:56.882 04:22:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.882 04:22:56 -- common/autotest_common.sh@10 -- # set 
+x 00:35:56.882 ************************************ 00:35:56.882 START TEST keyring_linux 00:35:56.882 ************************************ 00:35:56.882 04:22:56 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.882 Joined session keyring: 1008324941 00:35:57.141 * Looking for test storage... 00:35:57.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:57.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.141 --rc genhtml_branch_coverage=1 00:35:57.141 --rc genhtml_function_coverage=1 00:35:57.141 --rc genhtml_legend=1 00:35:57.141 --rc geninfo_all_blocks=1 00:35:57.141 --rc geninfo_unexecuted_blocks=1 00:35:57.141 00:35:57.141 ' 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:57.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.141 --rc genhtml_branch_coverage=1 00:35:57.141 --rc genhtml_function_coverage=1 00:35:57.141 --rc genhtml_legend=1 00:35:57.141 --rc geninfo_all_blocks=1 00:35:57.141 --rc geninfo_unexecuted_blocks=1 00:35:57.141 00:35:57.141 ' 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:57.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.141 --rc genhtml_branch_coverage=1 00:35:57.141 --rc genhtml_function_coverage=1 00:35:57.141 --rc genhtml_legend=1 00:35:57.141 --rc geninfo_all_blocks=1 00:35:57.141 --rc geninfo_unexecuted_blocks=1 00:35:57.141 00:35:57.141 ' 00:35:57.141 04:22:56 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:57.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.141 --rc genhtml_branch_coverage=1 00:35:57.141 --rc genhtml_function_coverage=1 00:35:57.141 --rc genhtml_legend=1 00:35:57.141 --rc geninfo_all_blocks=1 00:35:57.141 --rc geninfo_unexecuted_blocks=1 00:35:57.141 00:35:57.141 ' 00:35:57.141 04:22:56 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:57.141 04:22:56 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.141 04:22:56 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.141 04:22:56 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.141 04:22:56 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.141 04:22:56 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.141 04:22:56 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:57.141 04:22:56 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
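Earlier in the nvmf/common.sh trace the host identity is derived on the fly from nvme-cli rather than hard-coded; a minimal standalone sketch of that derivation (the uuid is per-host, so the value below is whatever nvme gen-hostnqn returns on the local machine, not necessarily the 80b56b8f value from this run):
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}             # strip through the last ':' to recover the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # argument pair later handed to 'nvme connect'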
00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:57.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.141 04:22:56 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.141 04:22:56 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:57.141 04:22:56 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:57.141 04:22:56 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:57.141 04:22:56 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:57.141 04:22:56 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:57.142 04:22:56 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:57.142 04:22:56 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:57.142 /tmp/:spdk-test:key0 00:35:57.142 04:22:56 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.142 04:22:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:57.142 
04:22:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:57.142 04:22:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:57.401 04:22:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:57.401 04:22:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:57.401 /tmp/:spdk-test:key1 00:35:57.401 04:22:56 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=337406 00:35:57.401 04:22:56 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 337406 00:35:57.401 04:22:56 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 337406 ']' 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.401 04:22:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.401 [2024-12-10 04:22:56.484678] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
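The two format_interchange_psk steps above (key0, then key1) each pipe the raw hex string through an inline 'python -' snippet to produce the NVMeTLSkey-1 value later handed to keyctl. A self-contained sketch of that construction, assuming the TP 8006 interchange layout of base64 over the configured key bytes plus a trailing little-endian CRC-32, with the '00' field meaning no PSK digest (treat the exact framing as an assumption rather than a restatement of nvmf/common.sh):
python3 - <<'PSK'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"   # key0 from the trace, kept in its ASCII form
crc = struct.pack("<I", zlib.crc32(key))    # assumed: CRC-32 of the key bytes, packed little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PSK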
00:35:57.401 [2024-12-10 04:22:56.484725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337406 ] 00:35:57.401 [2024-12-10 04:22:56.556041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.401 [2024-12-10 04:22:56.597095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.660 [2024-12-10 04:22:56.814317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.660 null0 00:35:57.660 [2024-12-10 04:22:56.846383] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.660 [2024-12-10 04:22:56.846688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:57.660 563622404 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:57.660 192072105 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=337411 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:57.660 04:22:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 337411 /var/tmp/bperf.sock 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 337411 ']' 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.660 04:22:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.660 [2024-12-10 04:22:56.917009] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
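Both keyctl add calls above print the kernel-assigned serial number (563622404 and 192072105 in this run), which the test later resolves by name instead of caching; a condensed sketch of that round-trip, with a placeholder payload since the real PSK string is the one built above:
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64-psk>:" @s)   # install under the session keyring
keyctl search @s user :spdk-test:key0   # name-to-serial lookup, the get_keysn step in linux.sh
keyctl print "$sn"                      # dump the payload for comparison against the expected PSK
keyctl unlink "$sn"                     # the cleanup mirrored at the end of the test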
00:35:57.660 [2024-12-10 04:22:56.917050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337411 ] 00:35:57.919 [2024-12-10 04:22:56.990405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.919 [2024-12-10 04:22:57.031089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.919 04:22:57 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.919 04:22:57 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:57.919 04:22:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:57.919 04:22:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:58.178 04:22:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:58.178 04:22:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:58.437 04:22:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:58.437 04:22:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:58.437 [2024-12-10 04:22:57.668548] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:58.696 nvme0n1 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:58.696 04:22:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:58.696 04:22:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.696 04:22:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.696 04:22:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:58.696 04:22:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@25 -- # sn=563622404 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:58.955 04:22:58 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 563622404 == \5\6\3\6\2\2\4\0\4 ]] 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 563622404 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:58.955 04:22:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.214 Running I/O for 1 seconds... 00:36:00.151 21898.00 IOPS, 85.54 MiB/s 00:36:00.151 Latency(us) 00:36:00.151 [2024-12-10T03:22:59.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:00.151 nvme0n1 : 1.01 21897.74 85.54 0.00 0.00 5826.50 1888.06 7084.13 00:36:00.151 [2024-12-10T03:22:59.437Z] =================================================================================================================== 00:36:00.151 [2024-12-10T03:22:59.437Z] Total : 21897.74 85.54 0.00 0.00 5826.50 1888.06 7084.13 00:36:00.151 { 00:36:00.151 "results": [ 00:36:00.151 { 00:36:00.151 "job": "nvme0n1", 00:36:00.151 "core_mask": "0x2", 00:36:00.151 "workload": "randread", 00:36:00.151 "status": "finished", 00:36:00.151 "queue_depth": 128, 00:36:00.151 "io_size": 4096, 00:36:00.151 "runtime": 1.005857, 00:36:00.151 "iops": 21897.744908073415, 00:36:00.151 "mibps": 85.53806604716178, 00:36:00.151 "io_failed": 0, 00:36:00.151 "io_timeout": 0, 00:36:00.151 "avg_latency_us": 5826.496020201234, 00:36:00.151 "min_latency_us": 1888.0609523809524, 00:36:00.151 "max_latency_us": 7084.129523809524 00:36:00.151 } 00:36:00.151 ], 00:36:00.151 "core_count": 1 00:36:00.151 } 00:36:00.151 04:22:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:00.151 04:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:00.410 04:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:00.410 04:22:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.410 04:22:59 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.410 04:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.669 [2024-12-10 04:22:59.865238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:00.669 [2024-12-10 04:22:59.865545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a220 (107): Transport endpoint is not connected 00:36:00.669 [2024-12-10 04:22:59.866540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a220 (9): Bad file descriptor 00:36:00.669 [2024-12-10 04:22:59.867542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:00.669 [2024-12-10 04:22:59.867552] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:00.669 [2024-12-10 04:22:59.867559] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:00.669 [2024-12-10 04:22:59.867567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
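The attach with :spdk-test:key1 above is the deliberate negative path: linux.sh@84 wraps it in NOT, since key1 was never configured on the target side, so the connection is torn down during setup and the RPC surfaces an I/O error (the request/response pair is dumped next, and the es=1 bookkeeping that follows confirms the inversion). A minimal sketch of that helper, assuming the usual autotest_common.sh shape:
NOT() { "$@" && return 1 || return 0; }   # succeed only when the wrapped command fails
NOT false && echo "negative path behaved as expected"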
00:36:00.669 request: 00:36:00.669 { 00:36:00.669 "name": "nvme0", 00:36:00.669 "trtype": "tcp", 00:36:00.669 "traddr": "127.0.0.1", 00:36:00.669 "adrfam": "ipv4", 00:36:00.669 "trsvcid": "4420", 00:36:00.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.669 "prchk_reftag": false, 00:36:00.669 "prchk_guard": false, 00:36:00.669 "hdgst": false, 00:36:00.669 "ddgst": false, 00:36:00.669 "psk": ":spdk-test:key1", 00:36:00.669 "allow_unrecognized_csi": false, 00:36:00.669 "method": "bdev_nvme_attach_controller", 00:36:00.669 "req_id": 1 00:36:00.669 } 00:36:00.669 Got JSON-RPC error response 00:36:00.669 response: 00:36:00.669 { 00:36:00.669 "code": -5, 00:36:00.669 "message": "Input/output error" 00:36:00.669 } 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@33 -- # sn=563622404 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 563622404 00:36:00.669 1 links removed 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@33 -- # sn=192072105 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 192072105 00:36:00.669 1 links removed 00:36:00.669 04:22:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 337411 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 337411 ']' 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 337411 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.669 04:22:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337411 00:36:00.929 04:22:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:00.929 04:22:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:00.929 04:22:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337411' 00:36:00.929 killing process with pid 337411 00:36:00.929 04:22:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 337411 00:36:00.929 Received shutdown signal, test time was about 1.000000 seconds 00:36:00.929 00:36:00.929 
Latency(us) 00:36:00.929 [2024-12-10T03:23:00.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.929 [2024-12-10T03:23:00.215Z] =================================================================================================================== 00:36:00.929 [2024-12-10T03:23:00.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:00.929 04:22:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 337411 00:36:00.929 04:23:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 337406 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 337406 ']' 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 337406 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337406 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337406' 00:36:00.929 killing process with pid 337406 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@973 -- # kill 337406 00:36:00.929 04:23:00 keyring_linux -- common/autotest_common.sh@978 -- # wait 337406 00:36:01.188 00:36:01.188 real 0m4.328s 00:36:01.188 user 0m8.195s 00:36:01.188 sys 0m1.398s 00:36:01.447 04:23:00 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.447 04:23:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:01.447 ************************************ 00:36:01.447 END TEST keyring_linux 00:36:01.447 ************************************ 00:36:01.447 04:23:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:01.447 04:23:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:01.447 04:23:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:01.447 04:23:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:01.447 04:23:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:01.447 04:23:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:01.447 04:23:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:01.447 04:23:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.447 04:23:00 -- common/autotest_common.sh@10 -- # set +x 00:36:01.447 04:23:00 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:01.447 04:23:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:01.447 04:23:00 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:01.447 04:23:00 -- common/autotest_common.sh@10 -- # set +x 00:36:06.720 INFO: APP EXITING 00:36:06.720 INFO: 
killing all VMs 00:36:06.720 INFO: killing vhost app 00:36:06.720 INFO: EXIT DONE 00:36:09.424 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:09.424 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:09.424 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:09.683 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:12.974 Cleaning 00:36:12.974 Removing: /var/run/dpdk/spdk0/config 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:12.974 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:12.974 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:12.974 Removing: /var/run/dpdk/spdk1/config 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:12.974 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:12.974 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:12.974 Removing: /var/run/dpdk/spdk2/config 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:12.974 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:12.974 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:12.974 Removing: /var/run/dpdk/spdk3/config 00:36:12.974 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:12.974 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:12.974 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:12.974 Removing: /var/run/dpdk/spdk4/config 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:12.974 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:12.974 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:12.974 Removing: /dev/shm/bdev_svc_trace.1 00:36:12.974 Removing: /dev/shm/nvmf_trace.0 00:36:12.974 Removing: /dev/shm/spdk_tgt_trace.pid4058369 00:36:12.974 Removing: /var/run/dpdk/spdk0 00:36:12.974 Removing: /var/run/dpdk/spdk1 00:36:12.974 Removing: /var/run/dpdk/spdk2 00:36:12.974 Removing: /var/run/dpdk/spdk3 00:36:12.974 Removing: /var/run/dpdk/spdk4 00:36:12.974 Removing: /var/run/dpdk/spdk_pid104011 00:36:12.974 Removing: /var/run/dpdk/spdk_pid108249 00:36:12.975 Removing: /var/run/dpdk/spdk_pid114370 00:36:12.975 Removing: /var/run/dpdk/spdk_pid115592 00:36:12.975 Removing: /var/run/dpdk/spdk_pid116944 00:36:12.975 Removing: /var/run/dpdk/spdk_pid118455 00:36:12.975 Removing: /var/run/dpdk/spdk_pid123579 00:36:12.975 Removing: /var/run/dpdk/spdk_pid127847 00:36:12.975 Removing: /var/run/dpdk/spdk_pid131804 00:36:12.975 Removing: /var/run/dpdk/spdk_pid139275 00:36:12.975 Removing: /var/run/dpdk/spdk_pid139280 00:36:12.975 Removing: /var/run/dpdk/spdk_pid143903 00:36:12.975 Removing: /var/run/dpdk/spdk_pid144125 00:36:12.975 Removing: /var/run/dpdk/spdk_pid144334 00:36:12.975 Removing: /var/run/dpdk/spdk_pid144586 00:36:12.975 Removing: /var/run/dpdk/spdk_pid144600 00:36:12.975 Removing: /var/run/dpdk/spdk_pid14489 00:36:12.975 Removing: /var/run/dpdk/spdk_pid149000 00:36:12.975 Removing: /var/run/dpdk/spdk_pid149555 00:36:12.975 Removing: /var/run/dpdk/spdk_pid154018 00:36:12.975 Removing: /var/run/dpdk/spdk_pid156498 00:36:12.975 Removing: /var/run/dpdk/spdk_pid161780 00:36:12.975 Removing: /var/run/dpdk/spdk_pid167016 00:36:12.975 Removing: /var/run/dpdk/spdk_pid176122 00:36:12.975 Removing: /var/run/dpdk/spdk_pid183067 00:36:12.975 Removing: /var/run/dpdk/spdk_pid183124 00:36:12.975 Removing: /var/run/dpdk/spdk_pid201612 00:36:12.975 Removing: /var/run/dpdk/spdk_pid202076 00:36:12.975 Removing: /var/run/dpdk/spdk_pid202717 00:36:12.975 Removing: /var/run/dpdk/spdk_pid203208 00:36:12.975 Removing: /var/run/dpdk/spdk_pid203929 00:36:12.975 Removing: /var/run/dpdk/spdk_pid204394 00:36:12.975 Removing: /var/run/dpdk/spdk_pid204856 00:36:12.975 Removing: /var/run/dpdk/spdk_pid205521 00:36:12.975 Removing: /var/run/dpdk/spdk_pid20861 00:36:12.975 Removing: 
/var/run/dpdk/spdk_pid20863 00:36:12.975 Removing: /var/run/dpdk/spdk_pid209522 00:36:12.975 Removing: /var/run/dpdk/spdk_pid209886 00:36:12.975 Removing: /var/run/dpdk/spdk_pid215874 00:36:12.975 Removing: /var/run/dpdk/spdk_pid216054 00:36:12.975 Removing: /var/run/dpdk/spdk_pid21750 00:36:12.975 Removing: /var/run/dpdk/spdk_pid221848 00:36:12.975 Removing: /var/run/dpdk/spdk_pid226185 00:36:12.975 Removing: /var/run/dpdk/spdk_pid22641 00:36:12.975 Removing: /var/run/dpdk/spdk_pid23531 00:36:12.975 Removing: /var/run/dpdk/spdk_pid235840 00:36:12.975 Removing: /var/run/dpdk/spdk_pid236378 00:36:12.975 Removing: /var/run/dpdk/spdk_pid23991 00:36:12.975 Removing: /var/run/dpdk/spdk_pid24019 00:36:12.975 Removing: /var/run/dpdk/spdk_pid240553 00:36:12.975 Removing: /var/run/dpdk/spdk_pid240800 00:36:12.975 Removing: /var/run/dpdk/spdk_pid24325 00:36:12.975 Removing: /var/run/dpdk/spdk_pid24443 00:36:12.975 Removing: /var/run/dpdk/spdk_pid24446 00:36:12.975 Removing: /var/run/dpdk/spdk_pid244964 00:36:12.975 Removing: /var/run/dpdk/spdk_pid250577 00:36:12.975 Removing: /var/run/dpdk/spdk_pid253228 00:36:12.975 Removing: /var/run/dpdk/spdk_pid25332 00:36:12.975 Removing: /var/run/dpdk/spdk_pid26228 00:36:12.975 Removing: /var/run/dpdk/spdk_pid263114 00:36:12.975 Removing: /var/run/dpdk/spdk_pid27118 00:36:12.975 Removing: /var/run/dpdk/spdk_pid272233 00:36:12.975 Removing: /var/run/dpdk/spdk_pid273925 00:36:12.975 Removing: /var/run/dpdk/spdk_pid274765 00:36:12.975 Removing: /var/run/dpdk/spdk_pid27573 00:36:12.975 Removing: /var/run/dpdk/spdk_pid27575 00:36:12.975 Removing: /var/run/dpdk/spdk_pid27906 00:36:12.975 Removing: /var/run/dpdk/spdk_pid29015 00:36:12.975 Removing: /var/run/dpdk/spdk_pid290738 00:36:12.975 Removing: /var/run/dpdk/spdk_pid294541 00:36:12.975 Removing: /var/run/dpdk/spdk_pid297314 00:36:12.975 Removing: /var/run/dpdk/spdk_pid29969 00:36:12.975 Removing: /var/run/dpdk/spdk_pid305106 00:36:12.975 Removing: /var/run/dpdk/spdk_pid305112 00:36:12.975 Removing: /var/run/dpdk/spdk_pid3060 00:36:12.975 Removing: /var/run/dpdk/spdk_pid310053 00:36:12.975 Removing: /var/run/dpdk/spdk_pid312093 00:36:12.975 Removing: /var/run/dpdk/spdk_pid314402 00:36:12.975 Removing: /var/run/dpdk/spdk_pid315433 00:36:12.975 Removing: /var/run/dpdk/spdk_pid317440 00:36:12.975 Removing: /var/run/dpdk/spdk_pid318590 00:36:12.975 Removing: /var/run/dpdk/spdk_pid327168 00:36:12.975 Removing: /var/run/dpdk/spdk_pid327643 00:36:12.975 Removing: /var/run/dpdk/spdk_pid328272 00:36:12.975 Removing: /var/run/dpdk/spdk_pid330512 00:36:12.975 Removing: /var/run/dpdk/spdk_pid330964 00:36:12.975 Removing: /var/run/dpdk/spdk_pid331417 00:36:12.975 Removing: /var/run/dpdk/spdk_pid335373 00:36:13.234 Removing: /var/run/dpdk/spdk_pid335383 00:36:13.234 Removing: /var/run/dpdk/spdk_pid336860 00:36:13.234 Removing: /var/run/dpdk/spdk_pid337406 00:36:13.234 Removing: /var/run/dpdk/spdk_pid337411 00:36:13.234 Removing: /var/run/dpdk/spdk_pid38103 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4056290 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4057313 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4058369 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4058994 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4059916 00:36:13.234 Removing: /var/run/dpdk/spdk_pid4060136 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4061097 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4061105 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4061451 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4062937 00:36:13.235 Removing: 
/var/run/dpdk/spdk_pid4064391 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4064677 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4064959 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4065256 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4065385 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4065586 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4065829 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4066114 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4066826 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4069755 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4070027 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4070277 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4070281 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4070756 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4070765 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071248 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071335 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071601 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071738 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071988 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4071996 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4072543 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4072790 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4073079 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4076943 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4081133 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4091379 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4091935 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4096376 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4096911 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4101205 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4106966 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4109710 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4119729 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4128503 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4130266 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4131170 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4148423 00:36:13.235 Removing: /var/run/dpdk/spdk_pid4152333 00:36:13.235 Removing: /var/run/dpdk/spdk_pid66749 00:36:13.235 Removing: /var/run/dpdk/spdk_pid71172 00:36:13.235 Removing: /var/run/dpdk/spdk_pid72737 00:36:13.235 Removing: /var/run/dpdk/spdk_pid74524 00:36:13.235 Removing: /var/run/dpdk/spdk_pid74654 00:36:13.235 Removing: /var/run/dpdk/spdk_pid74766 00:36:13.235 Removing: /var/run/dpdk/spdk_pid74994 00:36:13.235 Removing: /var/run/dpdk/spdk_pid75488 00:36:13.235 Removing: /var/run/dpdk/spdk_pid77804 00:36:13.235 Removing: /var/run/dpdk/spdk_pid78556 00:36:13.235 Removing: /var/run/dpdk/spdk_pid79036 00:36:13.235 Removing: /var/run/dpdk/spdk_pid81085 00:36:13.494 Removing: /var/run/dpdk/spdk_pid81565 00:36:13.494 Removing: /var/run/dpdk/spdk_pid82146 00:36:13.494 Removing: /var/run/dpdk/spdk_pid86257 00:36:13.494 Removing: /var/run/dpdk/spdk_pid8634 00:36:13.494 Removing: /var/run/dpdk/spdk_pid91751 00:36:13.494 Removing: /var/run/dpdk/spdk_pid91752 00:36:13.494 Removing: /var/run/dpdk/spdk_pid91753 00:36:13.494 Removing: /var/run/dpdk/spdk_pid95469 00:36:13.494 Clean 00:36:13.494 04:23:12 -- common/autotest_common.sh@1453 -- # return 0 00:36:13.494 04:23:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:13.494 04:23:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.494 04:23:12 -- common/autotest_common.sh@10 -- # set +x 00:36:13.494 04:23:12 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:13.494 04:23:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.494 04:23:12 -- 
common/autotest_common.sh@10 -- # set +x 00:36:13.494 04:23:12 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:13.494 04:23:12 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:13.494 04:23:12 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:13.494 04:23:12 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:13.494 04:23:12 -- spdk/autotest.sh@398 -- # hostname 00:36:13.494 04:23:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:13.752 geninfo: WARNING: invalid characters removed from testname! 00:36:35.689 04:23:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:37.067 04:23:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:38.972 04:23:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:40.876 04:23:39 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:42.780 04:23:41 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:44.685 04:23:43 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:46.589 04:23:45 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:46.589 04:23:45 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:46.589 04:23:45 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:46.589 04:23:45 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:46.589 04:23:45 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:46.589 04:23:45 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:46.589 + [[ -n 3979702 ]] 00:36:46.589 + sudo kill 3979702 00:36:46.599 [Pipeline] } 00:36:46.614 [Pipeline] // stage 00:36:46.619 [Pipeline] } 00:36:46.633 [Pipeline] // timeout 00:36:46.638 [Pipeline] } 00:36:46.651 [Pipeline] // catchError 00:36:46.656 [Pipeline] } 00:36:46.670 [Pipeline] // wrap 00:36:46.676 [Pipeline] } 00:36:46.688 [Pipeline] // catchError 00:36:46.697 [Pipeline] stage 00:36:46.699 [Pipeline] { (Epilogue) 00:36:46.712 [Pipeline] catchError 00:36:46.714 [Pipeline] { 00:36:46.728 [Pipeline] echo 00:36:46.730 Cleanup processes 00:36:46.735 [Pipeline] sh 00:36:47.022 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.022 348187 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.035 [Pipeline] sh 00:36:47.320 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.320 ++ grep -v 'sudo pgrep' 00:36:47.320 ++ awk '{print $1}' 00:36:47.320 + sudo kill -9 00:36:47.320 + true 00:36:47.332 [Pipeline] sh 00:36:47.616 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:59.839 [Pipeline] sh 00:37:00.208 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:00.208 Artifacts sizes are good 00:37:00.223 [Pipeline] archiveArtifacts 00:37:00.230 Archiving artifacts 00:37:00.356 [Pipeline] sh 00:37:00.641 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:00.656 [Pipeline] cleanWs 00:37:00.667 [WS-CLEANUP] Deleting project workspace... 00:37:00.667 [WS-CLEANUP] Deferred wipeout is used... 00:37:00.674 [WS-CLEANUP] done 00:37:00.676 [Pipeline] } 00:37:00.693 [Pipeline] // catchError 00:37:00.704 [Pipeline] sh 00:37:00.987 + logger -p user.info -t JENKINS-CI 00:37:00.995 [Pipeline] } 00:37:01.008 [Pipeline] // stage 00:37:01.013 [Pipeline] } 00:37:01.026 [Pipeline] // node 00:37:01.031 [Pipeline] End of Pipeline 00:37:01.085 Finished: SUCCESS